[Binary tar archive; gzip-compressed payload is not recoverable as text.]

Archive members (from the ustar headers, owner core:core):
  var/home/core/zuul-output/                      (directory, mode 0755)
  var/home/core/zuul-output/logs/                 (directory, mode 0755)
  var/home/core/zuul-output/logs/kubelet.log.gz   (file, mode 0644; gzip-compressed kubelet log)
lF6=72.b\vXo}ĺmBm{T$ՉX "s?|[c/.\Og{~{gf.T1զbSN$̊=aeE£Yx4 f,|=XHuNBRM, 2*˪J 4QbVQ P/_?{kcOYTgoSdL*{QgR_]V~MlW|n2ŻC/(6ZEjJE%\2p#V-}}!{j=]b@}B~f"qjXrR!ȁwmI_!&CNn${Aw1~J!MR~[$%JU6`YtOwUwUui +EFS5S+{'4WI?r² ed)D1kyj3++`;,D(,¢a,fFf0 v+GącW=Q`Ӻ1Nnf=LOE0'żgkr$YrK7ێky{[lӬ:3c`JF{ ,ˡe- H5j[{0LX5򙖕Lo+6&Ulj{U*yoO1xK1)K%izsc cю$a[W;z2nt&Wv3Vv3;J TגvtxfdizJa2=tW'L顗F!2s@;agvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvagvjD밳gtՇhڔAO Q zW) ZKVM{]cw@s^uGH7'Ѻz/40oisMSEi`7Ŝ Q(;/Ps]^Pr%7$CNJ2biptk[5ް&皆GU4»B6hCXjG@O&<_'훟~nRh)CJ9PB zç JQ5bȕ9Ϻ6o˗x"gaMDYUQ` Yb ^M3%I)KXK !,򶢼Skq8M+Rk[P p1hg1x``"k{JHaetW't !D#s@:Xzc|p,i;h`F2 L҉`UŌpWyV?bf y y y IPl`YFx͍"[R9Ža\(GIJZHnяk;qWcrtW#-]Grx_<6 _&Y^`;uB]`WY%*r%,ʬ,Md.rHIxa#:U Oe mIv~+;X?+>=l(%%&.Ncȣ{mt:g*XG*Fen+l= /36O]IKRn]+^o~:yX{ٔ|4UC}Hug Ų7x~Y໿- *oQ_,} [%nChJU.nV&u+e^ .sh)᭫3ynm l)mv*aO[ޔ2"" u@YѐV(jl*G2jdtC[+,0]bwxG^my|tK??2-`WP5 q.)%Z#e%I?䒵A3V4 p:ܢsQswF@pS&R>1)=".U7*PIh7D Y[VvRڤa_ ?\ܚkkES[ P N10I$X.iUp񠝉$d%eEvJtRp3 1CA %3Y'IDA|BG\;Ba(I(Gg賂 'h> `W^ZXIWg+qU V7y.+75<ҷ/K fszev ueпt݃)dCs͙,F) Xz̠99/)n'n4Jd4E$l56iINa=u]*RM]?ÀtÛ>S(}Jw1Ŏcsp.} \LG FߦkV/OǗ*o@௱$t49{W3Raz{l!ifaJ5#;MLn/"kmrwWgzg`4jޙK[^b2.v)PVr5n.\=^.'lmI-]nkmof̲E7k`Ų@mv/^Nno[Vk:*Gi#Dםm$5RC're1DV>WBcQ@jpl HR',lwR c *WQ&d!(g6V6[ZvAmo ]{FlN=?^Wr7[7Gd /zr/HOם3fypk ʜ=9kRqN/;}X- B`V3K[W齢?mw$,qi\}Ts_ gXx1WHu_U ͿK[|P7`%Mߩnz}}7<"楍v\\lN<( Қ񰡼&pO,F5[dW#'Agt ^zg]8o|G}rzB4ZI"ֹJX+cc,Ϣ jKNe*;վiK<y[ړ:`|o1}ąg? ::96ҜvrdN%+ Cb^D#/,)̤&- N[4+Ԑ,6kq)A,@ 0yYhȑ圕iaq-Tz5ʼnzSQ}~761FcUXT+bsLNXIEȂ0^.%z  *I!6$m4torlG]E]E]E]Eݯ(/I+I+I+I+1 I+CI+I+I+I+I+I+I+I+I+I+I+I+sä5HZiHZHZHZhͬ#øH>"#>"#>"#ZZ`i;;;;;;蔵蔱2vBhNހc@w.[o< ziq;3S."4Ll/M2;gŤJ^t.Uc-tjijLjRwf7 . 
C>zRC%CW$xEW$xEW$xE $xEW$xEW$DW$xEW$xEW$xEף9waw( >q7kםtf|t&ЈѪ_jTtӴ^:ל_1PWK+ڈmoGB)eұҰMTBY9IeY»0kgݓ {hW4 }];f =R./NՅ^럟ZjRk/Ma/bwFhi c6lf|uoL|z]ټ0Wy0w1xm$ge h7[zԴ>Ǘy ޠP-2W!FS&YjօWVHΔt gZȑ_aew6_{簇KO\OYmRMT)b)|I8ȖX * /$Tg4YN%&* ߹<[.0i./0]Ok,Z0bx, U`&O=3yB9xeW U0`O0vUB{̂UB1xeUһZf'mDʤvva`^WYږqYA/ h fU][ WMInL#H*i}:_Bf1+4X)p#Й:sV &-s9ޓ`B&xdԳ@ݨȵ2 o7cFL[6fl渻a3ūyd3%}}u+7fRP̋ oI;gz+:a-wʡ!x?o!2 _=3%Oz:rP%x.2xwxfo?6y~A0"<˜{!S kㆳߴ;PXn0(ZIN$KXb'$ι!Qs}NeKkFOs1i:JwsCoquemɿ͚{6o7?@%:1t9LBH.&O੉(bϵE4iH ,­=XhitGP lawݙ4+^jϧѫGiG,]}3iZǎhOϳ^bn) :aq 6aB8eq:Q^]-mGPεFK(8֑O\4jEp8Y| 6\=!'&}\ń p 5 F&rDokϨCˉ{fT2 J4R(ךZ VTggH?ͫy')>W׎r8#I#8kքx]%8߼3&H e0"NWB Sʋ>oy vw(Ck3ۧ8`\@`?F߆En|wrwۤiצ:|\MfjeJ÷.j 'ݾ-Ԓ] yRǥ.R5tm6-2I 1k4Z g%B;.zMGj/8[\g߻y&Yj_ՎLQ&5.pEp>cGFPdDnVlc50we-~p[Mm}qSQ.՗܇z=ynI=wQ7z&o4'+b꫁bdrOvw#S+٥C12JV(;bpn̒85jy ")U -6fQqM(E߸pÏRZ8F*8Ui"1Y+u*ʱ+;8Iq+r:8ERȼ)a&b"1PXW R2rM4>8˦܇5Jibu# `T",bY#%j =X[8N*3q:3e >uiD66zy?_׷RWGFaI xԉf <2IN|nY;d',&1awlzlay;"SwʺJK:Q;׾rt3lxa5fi=q8 <~Rnӑ.VrW NZ=sԦ|&u4)a?B#h06P8fXSJr=b+P`hTx0#W\3V)-RLS={UQBz T`Ff^x&'eDӐ+eZ$` _2;%#R+9>H+''ud,3JApC;q+#~ 6R>Y֡޵q$B%ٌ?dmo s'ؗjI1E:$eGY?3I"%JQ=lKtuO]ڙ2#fox$r5BTdS6+?䥯u 2!n&P.YTAtVB.'+ tYrlWZ$bV͜a{X'I-{.(MdbK#L4!cAF\(cX&02V1Z+0qbdԪLd%phgl)CΘ/[Pj3YF-Y [MDkxZJ (0c $&EtDAblOF(?t"Ц|\7 } $큗h\ WVQ,y 0%"(.)rZZ0w~R'ͅnSDD2A Km|'ײobsq$9hJj9d6BJ+Du)\*N<9hC*e_bb\ +jtKU2O|>ʼҵ (5(i'.dK&o&Ƶ]$̻<qUd>j;!jaOΊ_`1 8+h5LM#&(ZaDėzJuINqg$QDu5GW,E&n$h}y簔zLJR}BV fuo~U߄;d= l{_iͿ#_4co{oz7KOR}J*b6e7kI2^23~ЕZh_>Vcq (E q]qRR҃2У='ĝIE8. Ys((йl<ICr %ik7d9LݮQ9C yhNx\9_в|mgҏtnXo=*f9ɥX.Ĥ1sD᝺S쒯?MݯZe{ǽhep66.<[0VSBW{ƅu|ozj6.J6._`W~;WNz?Ra&r:H'JQ:(J"`Q.xVm;xhr_{bD_{*mXޭb"m+,b@4: \t#XT=W ,%o39c2xMYZo[WKGU_cTzf/?HU$Bj&efl+&óu95s}ϗk;+Og &}LnURc)HN$u?Ai"L.g,&Í k2Ia+NјoRmrQkE4C wW>_a_*O;Y>$QHQ4F*fB,GM kC\s;=YH²66@t+dL"( - SBFp`֒!BhChCnX:d318c*>eJ"H1Ep KK+`A!ckuISJ^K۵ڨ(·Ǎ29- @i΅ˊ[MF$([-MHk )c9 TN<\pQJ& d=ɳ.*9|:(y-W]_;.T{Dt)i!p ̎D|4kxRDH:]GdKNTпk uvdufk֠$f8)ܒD` A@#u'>0Wk"=pКn K߄)":\3iT驂dC{ LV栵GKvIK4m4m"H:pbʠ5#,D"\VF! 
6 l4 uԲ =eg pJq `ڢfIbDH(3-;6 g} {睊 ӇY) xqOMEXqL\H0TB#$M:Oc`1rS͋9r]cO/xa X"-q5,.iAݴĻE8M9OÐ0BoH{}"o՚ŋ*A3(ʜ?LJ]4j(#li)[@s=]^s{ 3үu.&^-ĐlNĜ?vrڬ]  ~(|p4 K$ڑ^=aa^;2%AL8:VbArŘgg&׏*&nԵsˀ3)kb GP}1&{%"~yolCcߚ =8pJկ?}ޔ~_?û޼}Dž{޽;:_ L0#AGЏ ߿CfCK6Z6g=]MUmNy͸73Z<|wl %rֽ)N I^Y]E]R/;Bdϫ#TUV{X 8O q!Ņn5@vtMo??m9;JdF+`$/ yD&`Ж'7tn >fa6Isq鼪v1ƶ㭫qprBY2y:.qF%)(i6vZ٩խgҺ!;{fڤKR \0=O̰=*DܛbJ\R]/T+Sr[f*bx>6r0mШT|pR* U4J+J.J]gLgYf Q[4:ZdB&CJbC@ >*cA Uy A#2#g*A?A>L ;ܒS-us[wpaCnA7NĖOc-o?4>q5-!Ny~NwΙ{&g1kp2%&f N?N!d}f2V*xC) 9Pv1H?UM{XHo=E{(K? IڇOw%`̥]2r=&On W,' haw۴ȵ^ڮ{ͻm~04X.I,;<+0ٝZG?zsZ7dhƶCZσ z_4ag(kZЬ|4=]9ʠN{sU)W!WR|uU`ѢPgn2vX+ yI;,eS *EG}R IsZe*nYSZ793Y^mҕd3*z}׶,|okG0NߪtqP-უ4GO&ɅC?&tPAk\rCWr=rFgڬ`q]@"~۵2 JPN AY?QEzeq)L[aQ<@KR2Tg|9J UB W0U1eVOz+(_KrЫ|5sjJm|/0_;¢iJ.1W:`te2b:TZQIĜEO/hv{vFўnblMwowH$%$LFjJL 3?`[FFj*[y J!Lg'^q<ﲑn^܍ۙzʼnD; ` R x$&/ሳȜXt?WX rNֳӽ'И'H{ ʅ[OQ`2F%0A$Ո‰@Q{gPf+ p*R-Hf#EDNn"Xy4cR\Ry)cp. (Xq}c Z-CiE:w"cg$cД)EXO kzq+eàr.<WC$hP~>d>ܛp{7xC@Fe+O2HJ9u*Sq:OہN3vϝhɟ+J:j(KDQWVy+L .k%GA蜶YOL#z!Ulg# be}0zF1тm+!R`==$9=Ω5`M~;%L%htx|`.LU38|hڴ:icy7Ipd`F'<:4J&8EpZgNlBgИG"hZ ;6`#*$;A UO8@.rKJ;n9g+_oس\Zh!(W &zxu~k&F$S0*P˽?.8}VRIo{bR`\|H55 f_@ʃ/K0D, Sz1Y\.%=+QA;,R \ҳ`zfu> t9(@HI $]1IcTIII $] t.@үӄu̇&>Kۻw@/Zdq8Ocጽc6dz"b؜[+AޠA}AT{])gj6wkʂ͓}r^r쬭" bJK*1ު-]Zwyݧ|:Dn$m3jb߶'52䒮GǟZWMtg[:]|2ͳYw-Srkn7MﳞouVziR(O7}=σ/Wn6!vY;ސ?dMvgb<"L1Ǚ"Iuts+G9JdX+}(-Ŧ4}ؠEK 6/yzA+6bv^q[q2u'Z'+-Hpx7-˃Z;!%%{cNoK=5@G_1jrzA&\@ wd!ٽuH*UU4?T;E5pU17;/ޢhom w ,OF].5#$Dف$o_ }FWKȣûXy+DαM Г.@X%*>V]{=mcww;IcJ%vu=ˤ_'¹9Z<%%Y~T7Fw'IZƟ}tK1?[oVd:Bqӕ:XO(_'vˮş35Am'EwOúq˴z|ڵab}}[x Lg/+rS,Kp3"8JjKI QDpC@J,s$#9&)" Os>; 0G:jтۓ*Sbl~xUBӋ\nޒ"+$f ϺH>`8PlD}2QcŴjj'mDH6v;$DfҶ9$m]6r=9/;nO[. 
ߛG 4-*KpЪǭ3W8H 06<0ImEa5ԫE(Π_ &5lHqdżT(DDX_Z_-thެx{ fp›$r{£h2+.LřGM'*ĤD'j 4إT{O*yZjo;Lv Jt4&_5ܺoR HBbŭasT^y å|&Ϲ :Чϭ :S}'ZZ3OIwک0;In<Ng3V#$ J86 9+93N()yaR2oBJAY&618sYfO]+Oallw!z6 GQo_(_s tJJ͝;_S׼dxWl(mzq*]%hqjIr`$Z<"{qq$!Fe#n(F 2Hn6ݐq\qC= " ÎڀPHm0H!iʙA@$002hj0@$"MTQ$rŐ*"if9FNO9+5FMATG>"2PN)c@vҹ FU> Q=1G"pZݨ{'ˉ9MP 0UM[C^;KEpHb Y0.)b1Zn>*~R{[M P9o\hc4m`u4JL`d:E@; 9|8l._mcchw<_c65Z#0={ISQm{`^&v &1FnDQz4E{ ,3GS[`QZ`^)xs."mND/Pp)Xٹ_6UTfSbcr(8hr0\R,3YW J ՘[S@5G>XcKdsltҡ!YRΌBnxw ҵr! %64q!H\땟Vm9[V U`he)1H+hK!7#3:1;Qn!7m\YnS8/gH!T E㢋Af*1"OLvi 9%ʘZ_nOWf=4=v|fm\HHZ`)rN0 0-CY0+=ۈ"om Z5&K{ӛJ+vf6طſ92o>]9wZْR lgQrtѡMJ>'"mNFYPk{EUIm$lyoNNNl8ݼ4~8MgΝ.V:U h  -텣2)1B\0{=_4PlL%0:[E4*+ ,|T0Ș<[ﻐK^zdX`bɂ*UB AUǭ.F VBT!B2KZ/iݧ=i on6?h+8s-)r`ݓI?“I?-}׵~`H!%1`ݹGs3d5;a֯?_y`2?x5y?ʋ`#e#u$"ˆIϾ}y[jOfGveԏRYÑ UOFdW?b>ƣ1]>Gq*?YZ"4i^ż<rN {"P" 59\(SSR 3|(V5ʟf0xt|̿-b2@ JBwX?%2UZIǝ^2jտ.^sdMPcU@G׽C$Rq'@=ޯ^:VlKE4ZЀ'ӐO%|IQAHOƒ*z*\>Ri\o=CKΡ5ۧxz*ꊩrO5WWG]ʜks[;@Wa~a懿Cet+ &7`-BI[G6HQ:B A%(abki ?REb%G_Ѻ [6~W0N2[1c^m߼e._,%|{?!..Dۅ|Ml65V܀x^&aZR9^bs&1u2P82d+%. ]cp %;%rQen3 mn7+ ROnzr:s2a9W++QdsdxouQfΒ@Yv)]O䓌]CZ%T(SV*=}Fjݗ^F)s^:T BɆ~0L9骷Ę1J%f"Ą;]]H!n]Hfl?{"@ u{  բz\MȪ<?\$&,뇪0L}{U}0^\v,ﴮ6%W"1YՑұ1PօBj!V,VWo絚իO//5g n)r$1$~{ŇZ` JOg9->=񯭻_;?7;}X\x.Zҙ1Q~]s1~>|twG߂WGzOQ8:eALy4nu`]źGBOOڸܓ)2/X}yÎFۃxt//?R?~ǻ_xf9p;4ЮyCr֋f\rƽ,>t"b'_V9]#ltkvٮVQ]U] -dZ-4TUIFZC l aThQ_k#G+< e}"k ՙBS$yO@ɑqNbkZs}l{^Y1'1G (F$AdP^}PlFmv:9ؙ;'TaWϻSmy?թƪ3&]\b3-cXUI]_^Р NA<. P5oaaZa[6/k#w@;\yf7O/E%f;DUx{єꎗt*Qفl(Q+mBr$( e!.!gaQ.t*+rnIODa077Plu(WS}r͢xCϟJg7Klv:isuir`FGVmQGF #;nJ`VߺKsw( w~^OݽƔHm/_>䵔Yޕ!r4w[`pۼȭ^tlzwhySikplwS]^*5W݀6sf#GBZN{CZw)e3ڦ^*ueAM@fߜ/nlnzX˪ǯz;-bT%@jidK򩗱=Ig uA?qG]VyRW;ӋG:pD}H'?>̝gyS'?G?~Ё;%Iڦ W ! 
|rO%OON+vO;}25f J+v" .k*oE,&Վ((%F zL1]d X -]H,1HEF1,]bh(C=&%h2rGd >Nd%jΧهg{C'&trY@z$UPdTX"F5( =֤d1I;/I,O&KϪŻTB\^ZF= UaKȹYY_OYd=YzƖ m,7ey][cYJ)z'K6] &.*rP(cMQY5Ej"491}ud@t*g?OY,v* RI*}>T [i R)HsHjf^xr}77t<Pz\#tó&ڵ:鞼?ݗW^W>9XNjJp")W0QBjm AD1Rл %tLAiLBBxP1-F}zE^'/o %\vz_|Ǐy:=H=gg|y3DZ ǫW5Q%'Cr_Sgvsծ5F4Q `KSkG>xDGMHPL\dN IX &R)$<8% %EYNal('\prN)FUgHl$K=a06(1L팜-Gk1h=BG޵#E/{l|?ep;d ,60(Z˒#x_eY-RĶ)ͪXw%~5Q֖ζנn,ZtYgFS7޼Y齍\)(%u:zmFg()cXŤ;5ʬ9eWfY!,d1W{Px}(C#e'`Qag3z4GB;.სX\Ŵ( [nǩH&v =AOG݄*̜-k՛E*(4$iNYdaڨ5gU3%C-T{*YrЛ[YTy4S~_F\]MZXb\oDy*my"vgZ\%*fS̅Ⱥ:?bJq`XBD9;!jdWNgԦasׯZ\Ȇm6.*ƔpZSr S୥ێXb e-N{tg$53Qxgc~}} Sq醉i{~jLٕܳHS\頍v6ga9]oJMɔcoH⫪xi*9,DE_ۢiU>4u/W;MRݵ(/ E)oC' ̍?Or 6[Eɏ/`$nGb+Ŵn'f+ _ fX9X_"8avҜD;2=ܠ @mj o]sk#O~vF~mU򉠸F')~HQxY"J=WiJ)#H h,-FSJl h ! 47qɍTyĄO4&zax.&f!Eb.|q(0Ydx-z!&H&$=x~Eo*?F5a(#Epǻ]K?`x]q׿!&_~~'i-ZMJ:w0Q/J(Yn!5.ę_IxvBx)/D@s&-iLwɳnsE{a/*m=<ϓKߕ.3/^TuS}`|7&-6y^OIh:Xv!5\n8v8vyؙBz^=gF7ljb1'1 (Dp"$8hzkЧL_C쩝UBU,\ʦޙnLP7bhBE& g Q$)rTfXMal7R&EhN\n"T$렌BPQ|m`=o !R))4ig Г97E[oso_ %Yd˂vC>ej*vMy^qM#]Xqo'oYJI:c QW02ȄeHc$@$~ͅ[mknFeB>' m4|f|ؔ46F9G xU*ƹ8bJ/c S [hN=f`줶66B۰v8 z7ʎ5^=;eYO~l8WL -7H8fMW/?>)@, C)8^x\(a+c1j fcfb+6 DGɶKosjTKo,DM X 00&)GhCB SXaf[#١M-s'3A9i.lqfD'}(|s #v<\ kQVQ A ^MW"VDaUFw޾b4e0Q3>_p_1T,Ee0߶q4Bxao ^`x޽3*\UQM҉`UpyV 0̫@mTKTc՘CG5`Qe`,ɨU\QR$P QAm􀐓?0y4 $8T9,w-9VOyBJ;M@Nu C ޅX ++y+B='ڞɭz)w3 3 !R'`ht cjdDzD*(xdOX8tT[ceÿ^N ȉz2|HS72Ti|iX0 **R!ɜy ELlq&zj647o'w8Gz 9PBKk pZ)CTQg)po:"hVQ%FHI^Ft2!!{d6wX.;Ư",˳N񷱓Kۇ]tu~nֿkhY/#yysgow9mZp}SiÌgyaF|xtᴌ!= 'CeWN7-;3B(DIx)IQZjɩ'QD)VmWD|4Qi5g0{Ċj5T+7gJKah&VlL=.*+T䤴/q }?PiE_- N1qI*9ׁ^ڙh$q԰"mCR  xp S eDjDN/, 2vyN&.ZzAgg͸yƉ漜ﺽ8ja}h./"Kw_˹*H0$H){u.at<3s)N9ՕPb)ޏ+dîY&t(Od80sQB9agCvF#l1W68R@x*/ufrq7ctEP_^QqRi;:}Z.,_}֕dKسOJK|R r&3cǽYب6YտьT< !|C_qOG^<@r虦;yU?uN|(o|w7^x?Ξ VҨ07n?Wo&pk\ֹ=,ocն$naY`?(m`}ѣ67񪇓2!Z}V+:KPHnX`rw#_ &kOXTۤnrW8_?^?} {` !/׉ "r4?zFӦinMKSO;|vE\Sm7|k@tw}w*WX&F NbLֆNVQJ7M,MA !_mleM<CO@ID9ˆH<FIɩ]I$3A )c8M1Jql)1'ۚ$.@& ,(F;.&jj^0GXt3 D̴ ;,Tn7zs3s j)G\r!]xƗ~݆BqjԤ!Ra1D+<>04U& >$%P$s|M 
|_H[WBNQ6-Wr9`^XJL:nlIBw횪32pj_~fbd_MF8=#\n,[4%\]½P`I GuhSc | [Gt!xgm< QҀ =؎Lk_LG&}4M܀ǒ+O9& c;L{o?m7A=~·njo?8d`!*d+CqǚcM]!ԀX0 )FJOYCɔVqzTřR#Lvb bM)XMAdtUCp-zRVXU|M-CZIi15::}։8.NoBc@\B빝L٘JGbޱ+F\3>P*EEXSܒ$ߚqRɊ|(9VCbkJl\ŕc fIEqBȚ2hC "JA[|1; ]YQl|mݦe>4ni;Y<&0ku,qq"mP⚔ڏqq0.5ؿt9ϩ_OpVT,>^`!ڤV4k.c܎́GCT;wl8ًQ:S=z}i+s>9O9L#$2CIYHT/Q}幹_sv&|uzyuz: \$NoK{j: ^{0 %]6fAF{[㜭/Ueκ%:d @ɀ}hKf5Zz "lM{7ay>K2݊^ Z2b4 %Os~3+'#.S=" d?`>GՇ=k+bm;t-7L+֫itPeR{ӴY@oq A-ף;$-` |tOv($Hj*ikr]O43;:ٱLܘ;an>H47'Rga s+],E@;Otz\q6/P ɘx3Xld:E3gךZNӗ*-&Dv2粯[zzóM1qXdDgp}uy֣zԜP`Ch&(:l1m)yоxt؈dc#{G&YtT{Ty_D~:\_0}6pyv:&Z%L 16X3_!3ͩU F7%.Xخ g7&jb94b'QQic>U%uB&bHI 9\ RbNE]@YMT/Rox&Zg>?-IBXwB_KB fcIWW =@M`) XЋ5)X=r{Mۏ3쯡~Ya`SC٘|OM)a.y 9-YsVĕ}gk *ZBX_TURb)+D:&eiZķh*65XS &L'pnlYdBCsq0+*͙⩭_nBO70 o?typtc!׊6Kj m7"\y6z]ʻ(-`]v}4 JF_nN是-ּ \1W7LX(.x-Wh1hdɥ; x^Š5of|=>*3ɓ鉂5dpIEZcS 1FbK (jLk_+8M .aN+{ y^=9*>s@*>&/LRW+އlQYzѰ&C)%BϠ1[PN.'v8ke{לkhS\CK&]%;u*Jf&f͸Ev[^lԅ⾍?ΚV}eׄ&ol.~~4v#q2Պ5Sh8&g}OLQ`CU.U>ك͖vF=8QUtUDmLl0Ab',A@(oĔ+fAc=ھ16b a,Vy+Ժ5:TBm@J91d!`^[ĶVH6"mH;j8`EEA ZBɁsxୗF(,8DqË?ū:ew C¡(8"ϒeupRg枷VB=Hأc32aP4l+Fs"'dGC5Ui6sQ>yv併pvQ1T(( myi>őЃXhO/L`i 6e|󥙗ڲ"Z?%?&}}l鉚Qh#K ASm>*&rј3곶YW(RVVV%F;jG}ucQMVU.sqYIE+cLEleF+΢9<pAiYbm TʈMdU*œs錳nl)|0[B\P[W6HQGXM,@*t\F!T2؞Z!7oezzߖ9<=8BN"8WXym5EQGՍj'~-x;>HÉf]A֢4(5HƜ+Ո7Տh#"*C.`-炏:ۆqЬAnŨ}hTFVC4&̕)i(&dFx-;n+&I3VȐa:nVqH)hq(1dO&Z{(|w{.R[88˫0%X^})Y3v#DH 1iU d֩(DvF*jʵ@K}8RUY~C~{-p!VJiB' hu5HىN8ң=7fE]{qy6[9*?#x'O|.B,,VŻ6YVH&j5+0U-(#xwvŵ_ԙXZmv{fc]ȭC[eE"wj+?=hڪ8m}u[n!KG1?8c~Ïq#W1}t?Vwu$L!ա!Gk&01,(ζ>#yѻzG]k(=حZnfLEYqdTH$#آ*[ NgeO# Xz= YΒoHz&lh"cCBLbžRJEVn:7pPqԹ.r.Arɧó6i.Bύ i5󜮿v}KŪ99&O9yat|Χ=.d[ީg)3WǓy CЦjĝȨ#sɭ.P9jXJj[N5hS[S!CW=/ULd~m(/cT<`-bQq'M^b̂-IyTFE*ږ+44ˢ7Xf(9z#g_}mBCC8|x眣XlPFhT%rQbC'2r;[8l8qձm ԦHdBmzn*Mvd2!tlͨi5 bYfz&j_;I̤`AʘRJzug'SkkoHŀ&+K@5*敡)yBPsZW8RpۑTq*CFڷ&Dk;& IQgVC)k(ځ)su"ZqJ!F5pW> TP!;< l_ƁOEUmig>U4Z9)][}UBTϒ:Gev--=(S9:z٪z69fр3kK6AMQqTF&Y)+{Oej/FRN;IGѲ$P;ylb"U"eTVYS"uZA-:;OZ*nWs%1e;׌Jڂ(/ZF rjIhdwPb9h813>;+yģ^F{]~V~6mjBiuB ŸhR E˺ƮՊ7G56\DDh;?͗3BcVE#MPlmY6 }J:DJVe4QCR 6lvTc vx0 
yvā#~AYX)Kn$v[z-K&@-ܸrr㬆T_y(wLW !+8CslIykna `P 8'efÈGVbT5Q6ya :j=WUt5du,!X5IUeR kjr:ЭR#~d'6d:*آݎC( mV 24577.>~ox*> ymH Z`yR4ZK]q-.w&W/WsMb ѿy!'oa: inzӇNluhȶٻꬵs).Rfde!\mG4G݄Kz -1^s`Nkb:d&r?$(1spF^QxrT?ȨCa2T&_ jn .k#?ʸMٔJo3OZpRj3mdG(G&-LG?6~| +7x=W_74>U RVvZh@툴^i# iNx.:.;\%;( *֥cjH.C]5]GE0Vb "* *'fcqM,-(JÍ.#z,ֺ)q_ZpݕyT-8~kA5dϻ+ `*`@=\t4^MCFn V:JcCv),'-"_jcc0[+q@3vR!Mv \m!)p+ՇxZq*+WlXj`r4d{km:e`b!eoktbƨb]C 񹄨k+"gK9ߡre5״`cbx.z7d123t68C qAEVӞ] H"ȺͻfP"o'UԚLl(8>Pj1D[h?l'{>%{S`hŋk63ѓ0ε\֔1:B.>:E[@ գ>OiܚkkD i֖r@I_[&SqS*RHX TjT1ѯV{х5BPHAZЩLE!$5Y`[w=$/?_(س*Xۇ埞ã2`ފ7ZD&'XW>e$!C)c"o[ f677C.OPW+=y?_'OQ.>[D*Ŭ)9q FBn, 9w}r]"5oI*\&i2OM=wNW£+9}u˘8X̪*Xxz(`bOG]yx|xvU{M[uFrY;䅎A'Ծ|&N޾ӓ ;־^do/;?"Ͽ?~Ow7ݯ?}/gL~ NoM_# zơK[t, zȻ`&!oyͺ7'CZYvO3jDvF~kW|@W32)?E͏gC ]|EOQ KnqM xX_=]ŝ>U4DUk+wJW,z|n4fA|LzDk0&sG6o /-Ž94"c Cޖ*$dKhs$eJJو#J6v9٩>1;'ӵn1eR.Z러f_88C$5\bBSee/XU36jzFM.iUme{e0UG&_/u)SJY ɹ']7H_HӗKvt%ߕyjw>_حGt2(l|E)_MAJ#RT"Ԣ^Ҩ~,H Rv~1|AywkD*c$v̉#,s->+P_D%x߹9иCsq}gT[vXfP)R W*Y)gu.j,iE*`O! pō.̈ [4^HpLAH\3=YǓC_}*PJeoBA"㋳ (W]:B]&8v Wt쒦|4J[bes\MzTJ>(hs2")xl)ֹ y3ilSul2КMHe. RKPtH9IIj,MKNt̊sgښɕ_}ں_:<9Fllj3@v'bve,LDtrJ2zLZ5Zx FӠGJY2z aXR.k珱H76mgP ⳻=tmx 8ܛ0V?odTt^y*},ǕA*V 0%CTq'%KYd`F'<:4JїHm3RK'GVs3ḥUh4Bk ( F刊j;A 0UK8@.rs8lmF,ղl)j˲î(,K -w n *ǟ'X4D$D2̢"EcLE:p"cm;ҏM6鋃+^}xYXz;Zm\a全Ci|#m*(_b}[rjmwt|EߏM0h͡S~ycv~Uّeo od =!"Sfhp`%J7bi1g;Ja8o/ r vtzOKa_g>jւo)Jt,jԲS2`/ _/k]8 ,Yry`Ym᫛ 3VaOfΌ/YWL-Gʡv*=FԻTjn< D'2!O7^O"$\Eҝw8|Ql WGiJؘ%»3Ш+#}3yЙj&v!ڲABr{xo'1o\&9ljyg'/TQ։褠SWsǷ7݊Of|q\Z1o&IadD44xHq\ptJ( cg98Rg|<09\k~H5zkG8~vdMU:RNt gv [Ka)ygX5`hrkCQ{SF܅K , |?yK'jm46lr-໇m~nOg?-p*o5?g3i>}T=?8[sr.p=P;PltQ{A;UiTT %[]Ku'['**_0ٚa3Y5@Lx;^7*qf0Nj6b ;^y0/ϕgMJOرs#Ap(S.)AFU9MT}DJ"-"0hĥv 6KRNq,3c*h%{gqoR0#41'C*HNog=WԻm} \+uE|ϿK O(?`f.)kEy1VIgUIc1[qs z~-W4ӑ$/)yݒ U&1 OEӉZA.1ŴdRq5rկ}4Sogm=2~MGqF}kԦl2~d] &]Ľc1z^To b0_kڠp HqFeD2䨊;/pTy[ңa9$$V8PHMLï(B`;tAOp9Y&Kl2vŕ.kU_%c+֨g}_'EG׾h;Dpy 4XWhBgT'1(EXȜK4u+¥&L4u%ZPeZb88z<+Y5/)0.TVSP.zJ +1 .D$Z 'uFwN(|PmDhB`+8. 
åh2Xb(7 )d4&lX͏2аoڳ}.;QTY+T`JX2%oLyk}Ž}B IXIGeIUqeiiH(|*Q4:mV:]wR>Y}MaX?|eWwMw>yM^<-a-z r ӈ#&H-Y)U{?1VYE 6 Z ;6`#*$;A 0UK8@.rs8lmFW,[lڲ+eyR!Mw2JR,/KR,/ :R*lD X2~5و\~5وZ9J-K6FL},e)~Y_◥e)~Y.2Jˡ׌rqoUv5;]V bDJa*G1+c_5 XǺQ'k$(d8xv)Lp\528>s(Ȁ|V*C0!B*Caj3N D佖豉hj4BZ"2lu[_!m id+;:sd܍10[w{]nZK=&ܺZrDфz΄5{dF z6_s®ni7-}PTsj Ȁ?.&jST9ןFWvNG&ݎ9#}lwFUsZO۵f~||ƹc村!ϷymKϙMcG&n^[?nA]uGhn794tB PXn6m.(m>u $ }%L$aU9.P,3Yt:XN/qj2M kfW/C-.IΏ2oà/(|ѳlWO| k] Xxڌqͷξo=Uț2̎,+|Sf|%T8 yiKI VDpCK,s$#9&) ߹iz8 gigY FwSY/Xe=ӧ*Z; _vJ%]8 ,Yry`Ym᫛ 3VaOf΢_'과Zt_C$Uzwܰyl^Od&mC%n41E I;-.pBQc+W3ӴIh٘%»3Ш+#23džyY/ <3IM&ezŸNc|&*e>&Md`R3n3ŻN^9IA_f6:4ooOŁre(jYnǼ'|_/*;WJĈ!hi4b+(P$0rqV!<'>ΚJt6sar&|X=?L**9)gczWct]KN6 ^V :RHACKeB5FZo ,LE+n 2Ȥ/ I_xg=r@eҝ $7pDE45t 0` AIC a8 3 %%c88/LQPҠmPV# A4\ǙeFHq{;d#8L2t@88-F}76Gr|{VKASˑ*R̽w8J'GbT628cȠ¹8#T9 Ɓ} (b*.C 1l, 46 q*&+tHBAMRqHdDԈN'+TT!H% M3Y6rzYWlvC26Ӛ+ 2f DaQ!\&V6w*0DA? d_7gA@Nޞ4e8p%8䵳T$C jRb'}b\6:y#0B{4Ah( ]nG~BvH}0Y;' 8R-1HCVOp$$ %ęfwMOWWU.z-KAf0 A;ъثͻĽ6{y7Ax3FMt:&`dґ Pϯ'@$yhͅ;nk#]"}a֖}ϴ|fmPlGMpUljG*\s\ 1xK1)n{306Kٺq%mrm;Kt^Y7XUY}8.]Rv$t&Ol¤4LMILy6% ZJž4Trޕ4|% -'C^->FCHR C)8^x\(a+c!Bw='AjTKo,DM X 00&)GK!ɅphC+GP&I=*B}KwZ Z 57>s%;S{Vqm,%J+ DYEf&%dZ̥% &siGKp+}x1o+["쫟}[lxG_0ղN^Ʒ;1B53%7/&yU hL@P+\ v牠hΠ y>EBﯿÏ{՛ƛ @3ƈpBx bS'׎ +TrQW&TB Qy7h$G4N>k)K(ai4Mte.NTiænS:4;4=xvӭ~~Ǘ[DxSm kZ]Gs؁2\1mfNu G ℻P+bJnl~IWM~)u` 4:vJ@U sO2H"=uZy":%tqMBSM@Mײ;'ʆ?bmk"pYO=Ggo,qG[8D%+4Rf") k=bĜo9-#Vy) /4,#h%7 P/ɸK*w_a} >NW*QRâ׷~?䤊_y| on,oRT%h."jتTzm0"8{oVI!Ժ(Q8㟶]2:Ř<xҡ?1=L0HGE 9 QQrp Tے5r6K(Idak+'duf&k"vz{|*dPi /~9%gΉ& 9$a1J0N I1ajw{8 s( dɕJ9l =blb‚*H$hN[l~4 HV}hYjNjw QҹdWmT8hHP 1|L2.hQr6?kTc]1k(!J 4@ NmRK| 4GS!ĄNS [}qZiӞ5)6bDV9l~g+M7-bV`' V2,fLj)XYQ`|tcKe Պ ³H qڰ@̵F#cQ_:pV0V9F-/{يޠJKc%B bU~n 6gݦ/P*_.g\E|v0~RerKg} LHNfLwc{WT>* H\L$iXP<& w\0kMp`gh6vZ٫Jo}w}U.&e :$j+wB_u )O&&$ ) `\>Fu_O[K8JOVH ‹RJOY Yf9^H :-Qy "$jD)vvٌn.6m5Iaxâ?9u(P GM'JQ9vh:RCGTwbߝ qqԭB&hY+MB8#Tl.2z2{mfK[`neDf\!W]cJy`(|0/SX,Zy+7esao!B9'ݛnU~n5< W{2'._vwu"=1ٟ4`0IZpeMhʶCZObdwH  AYu>|>JYHAH[3u DVKod>̼ӡo 
u^XpHk9@+IZYtR+eB\%b"tJTģAYE}+WYnofvzI׾]qӟ9 2}w-B-4| YPFx'x&2, hd]Ybwpp*xg8F;'ٯ9F4oFA#Ұy䋊2Er < jɀJO]#gv_;  /J'#+1Ype>?VWO/K咅c v-4QHӐ"t,u:gKə9Wz b0:9⭱w^UlvbRg<9W yy圿I~8&Q)ML $NrT*g8P 8m>K4AIn=,8lPG͹՛yiW{iRWi{#,Ȳo:KZ$BBC"|GfZs&8S1OR@ %' c 9Zos8,n[TY [lv({<~.eǗ×8s gv11 ˺|uČ1#3 /'|]1Xj6NVJm6flI۶!2O׍nw7꘏N퓔$bn]X4j,ֿEg1MLLt$cq{C-džIp1Ylά:z Z1A[Ÿm39 #ꨔrLj8;v҂DN:u_U\CShvoEݎ%I _0X:IP_A]JV{V:X;SM!UyW/Il¹:C#ttRBA"sY=ϳ{g"ڋh)r-@.*e$)TDmk#F" `jT EN7=%(stpQ2KR'7EF:)p&`0^+@i|ьMQzOv>Zύ&+?M5Hs^{z_Kxo} Blqq*ntN,*̉E%׫C9`j}?T*7X jF~qViƬE~NVSAA.:Qg4$"~{ۯkq*:YCPD h,*0I%dd 1kLSs @rEX>,Q$􎭼b (⚑9yS Su]vJN2Hx:^:߽ ڎ,5A/&q+1/R7Z@6!]wX9+m"6+{f *)I\g'.diE HXE:`M( P`xAlRPQ=Ӯz^o͌8`:> Bgq?>N)f PUJ K1JF8^ _R b@4O'tcw}!.OV`o{ Ż V­w`kg뎸]5Еڃվ2cxAK$lR¤"KDQ!ΡD37^ o4,SAm$o `IYn αuֲmmF"O.zo7olhɶTZ{%][=ZxXF-R\i6)! 0<SWg-) T@*V=jXMGBN.؜4X1VR5c3r֌fquQ]y9e!٧\SLf~q~/S~Cӗ/a Ei6rkcL䬠6qб qc 6EM\Y9kx>I+hfܱZ= {)h"0Sd1&ǬbN'F:DEZE>rS}X<2#ClEd˺&eAFe0pdNب69ި_Բq_4b3V#QqЈM2xXd5k鋱EU )C#-aGl*DR~aAI$m%%zcqGu{]v~mO4( j*gANVy%dWjr_aV9 8>R7֧ N)U:vH lU"m}O }d ?F[Ѳ> ;3Ԛ筢ì6DV& j9~3lN %.v6Z2ZD)PK~h\>f^)V襴Qc`,H6% 7#g0֡{OHzZx9-ymT%:S+o(&\I@JNzo1A,D XaCS9 B2#N5 UPy岵^hu "xh,gPz+Es{~YDvABVNҎrbH^>Fee{OTD@O1I#ٸA;VQ@"y`2 +YeZ͠ƃ4IKܻ "f4m|"d<їJbIN XsAq4Q{sOlwݓFlӪME¨.6)741Ff,le,6zuAyDOMEMbW3V\c[)ŖjhjI3gBSj,07$^S!1 c *l+t4+@oAotm\$j" aQl=_k/O_/~E\42utTo4Z{J3R K730/3I#'%+ U"sVٹb<`h#%P!%_.{aYo,[vj(dX^kBIE M} ˦lzcb#}lATwk(Z/;`ƾA >_tN&ħ4]Z7 C:$ϺߒW-!4|7tU.C S[d]NFZ~kڼ %u5!sRe`EſQ>$B*l2d6eΩ^M`r.s#nܟUoG9պьgRGo"Z{i?'&'kʕ^{\~9/_ _Oeeߣte6+Zr~r\1ěj7"ˡ (ELlLhMijrƐ_uirxCMav~ ]XIYeAgT"bdPGMlJiY"r`At` 33Ic?X8KXw\ L6S /,)P>1dP>[ޗ@~а5ִ WS!) .,jse PH aTW!A 6Z_@PDIB5I%ZhH=*LN <.Ͽ'Z}gOqZUoٚ^9ɳX2|YX.}`1ЇzS@>u岸 _M|;aLV״Ytl TPRH8:aX܎ttJ(}wymgDȜUGHCM9nrY!7}9w}p]=OS_eT=HX}(PwG˧g0 ט])gˮY#7?贋dVZ`ڠ3b|p)c].t OSg R Y5C]֕ Eɬmѹ֪0d rTS>glj=[̓YwS!}%5ƨE]{#3Hn+/Wjv5n1#c |=]w<^y]dt]Ko[+Ff0s:d,{p1] 0}7w >c'XJ)aˊLYrIl:*~_X12sIhN)Eŝx \h:>@? 
(Xjt.h?:ZDE%˘hMJ*Q" ,')W3.M9 Eh 9l -ʮ./ ~XL11fDKZ0iE9Zɋ%7#"!~.{rJc!Rjxs1!)D%E!G#LԌ$ B]; #0f፥%ծbӼ(BNl8A)z3NSǼf1U"jklk(vީLΖSױұ< h)GеXHmHΠ<;A >>4pwfzh@lN7CR6 :9xAeL%B|?-=Óג[SU;wrtIsL(M'Y9t>Hk&'H4꼘w^mޭE,od@hC A, N0 ֋ΖN~Iɠ0rY@]$dT`ZF>FʆQI;Hb1xhڭ/f74-r`;?~~X^KO'%2J˞s5{f칚=Wg\͞s5{f#빚=Wj\͞s5{拍;fҠ{f칚=Wj\͞s5{f칚=Wjvf7 >djY3֛oo&]_B )]u{&~N%c;N!D!SgCQH8 XR&fBM*vxAR)FE-2ɦ.IHQdt$(1Y L򎱲&jGgeaN6g13|J `t?+OLUPzsZ|as_>`fmALǫ=^UH'!˻_ `#U4xq`q bG-֓O:NCEqQ26(cK5P* įEm)L9 A9-UHчiPPDJJw& aj3qv<cĈb諹MbqC6c ̦jxj_;\6p 6D86wXK^35i9flkfڿ[Ct꤃WB >+<~w l]Q!]q$qZE#]tGoyfbI݈w8^8ˆxapԨz\_=5200h%C$7=c职,=:c(MdZlXgZq#/!e$"2 9Ა^?\B&P*E areG֑{|L*#MNZ}T.tt<&7} 29ڱbDm=TA-T-`%bJP$!tms#$4 w͙;<tHɃ^דzƃy7m.5^GA{ "hYSX =f5lr.Gd1:E*e!01ιh F/ȇl aS`Q9([sEK,&u!X *Q3qU+tJCHkoiϣ4O>k=| ^c7}}J4>i45bcruɓ4͟-6EC̜qе9lt@gE%;{sUcSeeKbUT1*c( d =ܳЧB(lCxjύ&^zkv|ty$Mۨ4C=r< X?dɓ p%e]^tJ?x(Γ%gin+ᒧ[1~~}ޏ=' =X­~/u$ZGTø@Nx#t`NFЃҸzˆ\`Pdheցp$R)jYbQƬe`R9xc.L*Lڼ2_gx IɾI8dQkƖ/ȧUg]RV(Q[Ƞqhl!I@^񏮭ɖJ`fز3Ԛνc P&,I&]0 1 B;+&%I.v!6؈d2DŽ+)+] vR`tdl@k&1lCO&ü5OX*'EkS=q=e;P[Gk@-DKo?+ 25 ̬=vqhP$zeI#0SaR& $Y Bh^RdsqX܁QM-ip׳7na*"B=PE_rQFZDc zhe,;1V@N(ӡ҂ l 1d%S8$BB_ \;3L2XఁѯƲHrbUS,ْ-KEٝMQbb+YeW l=~[Z,+ 2f DaQ!m4U ;/4oEkGޱ 'Oi2 j:l6y,!)d(.@)bZA-78)ISmwa,9o\h18 a5:o%w&zNXxG#U{;6kzdo}z xe>u9ţi qP 3G 8BXV3W ќ2[[sjU 5}-ȺƧ6 *(6B(wI cPdrA(;1G,&jfi6c Zۘka/1 z5L 36pJYRNnUjti%J$Rʤ !؄ *+\s%秣&vX,Qd VSZ8S<'S*oEۭ(Á &slF/ xX̃bmX'ls< Pi?mmnp=pbN|:D7ܕ~7_mO_1?8ft#t^ )::o&d.%$ {pOO |K֬t/T?_o)~#U_Epn]_%AL\şp~n#i1ڲVͤ3{ˆE%@$K68pUݜ8S:8cW5!H K EZ:ǖ;(OtZ^M᳛a)|Cnu|*R/~\[0`&j1 Pc1ޚ 6Q 8v`stX6TԴ!&AQ>^il1}44~FI"6#*)dF |,ކ&zG9$m*h.օ4:<|keO.얥ژEyi#,7 ]_-)6) By C>,o$W^{;Cя^{oI={jT\wP뺯`[SɲB;]ls`P!-֐ UsCune,:uyif0%sH[]FS<2꺫X{}Z&@ĵ r B($W1pV{tH_Z+y_2cu]{0ky~4RHc0%`>/>o mڭgGr5&HZ-EՒ:J,DgUiG\CRg"E<zhWGq=AU:LJ:bT2Cd G @H"H=2)! 
>ptݫa"CL##R"UҌ;K1`!AyûBBcPƯLXw^DX%G8vE!kn %s$Kg߉{./MO4bHG|t;#8J Z'IX aCZ;c#3`ʕGw .G\QCYAp׋+ߗo_d\?HRZ1?\ثN׏S-D/ͥ\ʈQ^T%דfFseڳ>+}4Jg/55f72_`8K NnT9L~)>Ҍg}ɇ7WYpvWJL0?:휞꛹I~Ug7nf--#],,*'0DzG YL+y׽LfEgtt\6{ N Ll$ A}t!X>EK]TN1X7p3Pǿ|>O??9N޼t:y?oO>}x_`FaD`xc~Lw/Z5547*7y2.k򖗌{|$}U@qӵ2 J\x:C64WoKUN]]|'_/םaKTRTkz3W.DX, Ooov7FG{ۦ&_xDDLڡLR)'*m#iJ Q(Eq(=uaGN:-gI#^eG%w #DʈK7DD;iKZc:LXy.n:6-Ui75LkO}I`91'il{sRBT쓻{㮒Jrvw,讘 =rWI'+MJ\/*I+IUR^*]E޸$⮒]q%]@w%4A^OuW 0ro /*IK[</]I_y?03LRժ\:ۙW ki"mlF- o_h|éKK >&K9T|{_:qIl:r0 ښCP{j~/DXW'c̱r4G;)@'iڜ1En**R+C6pΫbx޹6/jxcKWKe$"_jY"M(/IJyh47XAEj)[FH P#$̫ɽHrOyr8( ӔJⒽI<%iYUrV"$=XU|@R4>PYXx |uyl_^wxޙO$S*[gzLo3u֙:[MkZC4*[gzLo3u֙C4[gLo魳gzLo3u֙:[gzLo3u֙:[gzLo3uֹ&&4=_Vzznjf##IZXѩrڏ!ֺk&%W C+9?&j0ၱXjT ʔV0bj6骙)w[\tVۨfٝsro*`i eD:D@bhkgbmXa]ZJ#68R$#Z{1[,!2Vنúy8Z*پPJcpbDl$ipG`-ν~\,A K wyj &Oly`.Es+6Ln3e☃sѪ^#22zV477x_y5;yDAq0}vkYh6#!-ō# Jq&u7+~ŨL?TYKT,RU&ím;.6;|]kVjה&j5J$QT!QVZiiHKsQ4:m9:ˆ^FcD2Y+냉K 20+Cʘt9Yh`+/-JzVVA[-ۮn$ݷhjΗjt6>^:z)`p`dYLE68qQzS)X#+YӘGƌAc,6؀k1*h2:ZrD`p'-1q8e־_)B[T.)"G;,HnT/xxC4S<% ,Z2M )i qG4S4SmqR퉻֒E+\L6(&R] o#2qH8ɃE*f$CJb @Tj4 Q!(8v0 xM7 $a 9ww;-/BD1"!nЄϨ N*LS7"(f8,2p};S;V6" Ad)/{"=ASX\V!cA\DN ! Aőp.uXER%V5SlH%1QMKc5o)\ۮ2R;m0k8nOj2LMr)e5fL6"1)Aہtݘ ִ-sq2(wTR63fVϾzF_I?#HAR-D4̢"qy^.+BY myY msYɣ}9"i\OOCڻ2;ӗ>љ/zb~L f=ϓܖ2_-S='M˺vb:9a!hrXSULZ͖7%ծ>Mu_Ъ|ҦǪ!;?׃Kϑ={AEz-c] *:g5vnsCN}H`8 UyC#iD!xo[VC2&X1-Gas92&AD[kJ8@("5m>g @JRϊ%E/;0fdB1@̆Ԏs7qߧҼ;`2] 'c}3eg5쎮]>_ҝ*r7~OьoaO tzYnV3m%m4ϻkJnoQZI"e']59L]ӊ}>T3ϣ sLRz "̦hI1P* K訅[ Mlj+%(}T$\Ai #XQ,? 
UZG4g}=@sA$PmaL7x95 Yϩb>Ԏǧ97xl~րuZK!Hf<8N WRb`/~SE"67̪WvuQ҆ѿ,q; Ѩ`^גQ1c¤ xI'( fah7&3 @8IQ\̺dT J+"6NZ[=EtM2~tzyɡyq9nw/EST]պQ WQi6Pkz⏮>![R* 'yTkMH>'ymNFy6/ R8q;Ky,lsfS<fnD={DL?Mfy :ssF'F E Yjb]dɘAI`1gĶWۓy TɂK'@љsR듑VFp;Nk幠v381jݦQYL)4& !'A#QTEZS )mlM%P"[QeX2P4L RID616g7F2OhsAfq("ƈDqkl l'cls5Q֤Ћ`Mh1"Y+ULH!t4*#[ҀVy+1B:cD+qv#w_u`\.}i1.\ܦl iL PQG.h 13LӬgK(FZ'l[&F`!keCYBp 褴Ac@oelL=\C&F_'¬%{䍡1U_$Zk?5,/?[hf<Я_.1?TI@Z/bRA (S$M Y B k{hgnǬfRs3xgoXoP*"zm|GUR\VM, Ɋ+5Vw+ 8lt(PP-ީ8{γ~^}ٲE5O0N^iHF'rɵ]̉yctzFFG(=%a-oߊ/z~f<3'ʆ<M֯8Er1(R1!+EV3g74I{ށb"d^4q)F$9^ +>/If$A0r8Ym >~િ'Q xeJ)T HL3HeYbXlpؚݎaZ_D!5SC'5||R 4*k+2KW"0_!:HABVG"cgLCp/qô.lv&]dsµ:.ꇲtHMixb^ʛ~6Ns\ t$ u)tRp=HA3I(O~#TdE eHQ!QLmD&Q|,d vAZ,Ĕ"L| 8钵 EhTJjd::i3qv3;y%5G1!c>t˰=6dW/zj)OFYg@1EY.V (dZ^yI$EeM&} sMĆ&;l o]b97yn;BiKo&~v;ys}73 Ktx3-Y$=Fg: RztL E@7X5OjTV ڪY5 IKPt&:"XEFcqL&1 ιh /ȇ5fC QƐ6>怜w{Rl's #A?__LGME'z;$ -kk>4qp|˟&ǕvUIR{eŒջB<׵ѓWWs/DE]P+fC޵8|Wݲ_[ _?:;Z\^`WGxnd<`ͭ DZhmvޗ$}=bm}0YZP'ӫt +XJ{1pUw)pUEspU\^\!%+X+y)ps*%^ \i]\.]Uqb*s*K+DtI N Z\ \Ui=wR-\ҡ߭X;Xw5Px*YKNbM6TFRKXlztߎo8]ew99tͩ4dpjC dP9Duv^ O d z<qj,C=ϘȍM=_X<(]\XV gv?-?.C8DpA1)Ux oɧ7sw7Im$SDzߡC1_]J c`^-X1H9בbĹՑ:hĎTG YG&yR)%P 'SQYS '4ۗ(rrm%II{ y_::6[3EP:yz)ĚqXJʣ3 ]&W_jyP^=aPn*ҹ=esm 1m**>$0sNbU%C ZՇs=ZE;k:Ft}eǢi<;*ψʳ*hN# t1 %Z 4RH5R6i)m>g @BMI,-R$} )t0ScL4A2gDU塙)Q;+פ~iZ}[ې֨Lxz ݬscpL[EVOdJw]{}O,R9 ;ܺݼy8._u4e-ƻ;_Z0v[q/ػ6r$W2%H\^ODLgcv mIʶ7Q`~s/#W93ypZ+#W %^.E֤|}^Ф055I!d%y 5,T򫏃fp}qQE+[cC]+fO>,_"cKe C qk.6""/iT2i FT̄1BBe2NЙ 4괉Ȭg'>z_Kd'I-˾s>q7䁝ϘvFdݧ{_Sݮ@?,Nn:9I9X䏑m䱀w[<&/w׸([}Vhݡlike;\!`vy<|YwFΏ,XPP:x z0;Ŕn#Y&6lγv'L=]uZܣxK{ ŤLFrBhԜcZKKK`"dqLf1&KE)1V >(൑cu`f5r̢COzl).M5Bh?6dSv;qt祯ҴmL2)6/zk~Sul\sd.C!B֒KkϭωIBt oᝡAvɵ=ML /y@?2xuy |7GoF̿OF~r=:L ?uQ$?1A(~@gϑHCi8>o_H@"5C}u$7) i|Jy 5 hwP'κ'< ĹCl ]tFS 6DYveVQ17:0`uP`HyO p *Pnܔ.y&?5V9(SAcALrwĖib[4-ZyMl kb36ZlL۩Ϻ4e?hNg)TpM L1rB.iI>,NI6'$rI8Ɍ+sT>x6GyIYwQJr.27C9ª"I4nAH3dAz!PWR!\d3˜; $W#g.U)yvs[uݏU]I܂W~ "&HJ sp\؂s E{q9& .VD_D RIkR0XYPQE^4?:SO**WxNFw7udiamޛ%ڳ1tqkNmd=lV`K0A42@@!rZ$'L8,w~tuښR )OFl#H`<&Kr &͵$3)ښ95c=RMVu woҴh3-xMb\n'şh`peO~pLJf"(bU) 
\p5 S!k5;tGe(Ξ'e CMM@BduYCp:G,[{g95v%Tv5WkCe |UR'N5wIVI 7gF`U}6@2!CEGtMB@a&H:>!eX$]_9aO%x*Ue:i`ڱ +jIXьiUEYgPU#:Y)LA1*emLD.z&-`x`$y4B¶۞omNO~3Q;$uGJֶ~F#?RAI8e 75mHFYO52^;`JNzd{Xޗw;lW1w]b0P|~-wǼnQڼWLK蹈6ILVDf0D"#ȍ;NVܦ\V(cGB$µG)ꄂh"kq<'GnxB̵1[s'UOۻ|kyXiZX%/%h ! "Q1aY2[1= I:Ͷ"$4M܎K~#:/ox3uF0$9dc9)lܐD'A@-U*> +-C^g}Jdt=.1&&Ց+3->$ȣ8ND+А]cn;ZiЭeW)Gׂ@ʶHȠOBFmP]֊:A,H{X99YrMax׻"j7o'W75У; gqQ4*QFleɮ}R-y{Lѽf_p?W~k/r;}v=% b0ӵ~mEjo"^pg`lLhLVtn~PkYfw$ ׭,X,pp7_dé yCu:`f1A\FĊףXn};&{%"~xzIDM7;Y8܋Kb?>/Z~ˇ퟿}w~Οh&0#IP/ gLmkMMͧvZO=[]慚fާcԽk~\V|/8Ż^kS6&y^&1V{qv>('W#^VXuq_n]6H ?~S[j&HLCږnޅ fma:G#s״cl'~_' TeZL ^p&í &L$ G4J#lSec=W>;}kG]V.(9L2.6J]Lr&yk7AcEWɭWy~92#y j)%2G:p< 1G D)B*H7FsL.˪|ym9徶b~!{ g?u|O_wۧ=mo\׶o;6~x8|¹u1vۺ^#\7ۑi=2hLKvv|XY*]hurkBځuQ-rVEc0)"EvXDWȺ(usb=Ӥ}ZDK;9k«P,&] r\DkՄҖٻ9G)׺9~MH߯[65bõBVv=`3Q=VM V_z), jPQsfRj`Lx)]vU[K KIa[4)@T]Q*C;&2[ZtʎmrT=%U9*)D8d,֪ g^N-ӳhd K6i7qP+ayr|x>`Vbb ډV16#LR@JL^WmacLfd7).iKx lgŅbZR! ʴ&[ u- xO |LאZ|4nj^T,Yy] )9Tڅ!t)XcEg,ĢJI(Rn6!klsR9poJEJv5<ش,ݍiA$о7m]<4譹̢9,_Zg143ɫЙ6 XLſ߼Z-߼A>kJWGSO?n'#%jNvzﶒG 盓I p.sыdVH2% /l ^?޲WoW_kbT>h_b,ar6 ]zݙV\e'_tW{rr/P6]o&?,?ZPW' yf~:=-`߷.MDtHnGY/D=T.+iCz+<,[{yns:>z|!FCP>Y!&>*?s~GgD>k3#l$KdΊiǕƃW|9,:KTeLTPSE ZGX*hs cVmؚE,b~mMĎqΒm\{^W14^LVBl:_oǿR՚gOW:j1H,d&JڑfU Y٤%knدv&k֖9 |aJX}23 JU]88k< Y]\[z 6Ųzbt=# H;'GEGX]w5)CjD`Zf} a}&-]&%^ \5W&.Ugpդt8 +kQGpzoઉ{c 7 WM+{WNjhy5Un7o~37oLCɟ?}dz<_ޚ7epVРD|`ӛ\pYi1'p$eY7k KWʶiW %h10֭]dyxra˂HK{ t5(--\ mM{os@XWbܶTfd?:dΎtqAᐼ7  q{ & G!yX},!=?K%פŝkRiyJm\5D\f_IpդqW4>E!Vjj:\5)Ip ЍMezjcw1v||27ɹp"zoOz*XS<8%S4jTTs3?-դr0r%6AalX{Ո2=@Ք}9ӧ]húШ]`ʌmZN!(oŀ&CPřT~ĐnC%t;ҍ!ל$ )0QA;>UY m w/cbcv0b4$U7q?I0t2 H֗*9Ju% v(Нb|}EdvA -.k$Z}(YcVAMsU /OG={-I'@VU56bC--\XB^voAY)8T.tճDgXVZ&袱N/αYd|8om,Cg=&lW~jv^*w%%: xXZbI5,1e(/@֞ZRaT*pP?yc %b}@^j8IeNlIiBEȅL:RnT+G;i0 +A"Y4K9X d9Ă%r,ArtBB27:(A<GɖmdN\+xn+ww'F'vkJupπLZ+1UXWT]6FI*-eY`uQĚKՍ#"Nw6@vEMMwVah  qu@Ed  9E֭@M>jS1¨れ hGvpg=5Jk_$ [-EH4m cN;ry9>'}sQdfl/e:`Ll= Tbfb[sGf?UڪOfbF%q6Ut-aeJE)ifN@yە6p(YUcmj)ZU}&F'871&S%aqMLf=\u0~ۇNuVn< ίžXbǃ0S'q͓N\g%۟Ĺp&o:~<9>X% Z?feNڳ1rP,^vlvE`gpʈ 
1$S!)c,%RCF4K۾w5 ")'/rT&]p%[m!6bR8#c? ͌M:c!XXx!Qk151ˋU>p7 l#v4rDhjB J8 fkK1b-V~/;c^7)#%>x5t7Z'Js. +n(uԴxRVK5Ҥ-Wn3e,Z,Ieij磨AJ& h:#5DmbQUvpki6o+aE,8`AIYΎ `1 b{ZÓ&ԕ]EIhN88ubfb(AasT>6[$28˹%@OF'T}+? 1qS4`_9ƷY)nb"\3iTC<^J)rIvhi_[O'mM5mej1uFD &$dBuheF%7Ws#L[Ld  1(G;[m`Zombez~6(,?_0} ?43+>6W"r袎Ōc$IK 3̤ٟ7M.;)oGP"S2Y_ƫ82PSLp%gTIv'< ˃ɔ\ܛWrt%DےW&j"N O'L=Em;3_ fhfk%۷ӳK(zt09[!z4Ў2i8)5+(Ue?=SӺ7k^̌kmuvC}Is28M II0 bN0'jn EԆ?A><~⇋blHhHJt0b0Jfy|$FL82fbG|'WcOM׏*QnԵQ' V_ɾZZ3z?nOb^֧?j$_@WK槳I/6]TUV;y54"C PN_ktMГo)J|i+WɌ/V҈L(4D-OnT;.d1:)l$V)Jmq˼f?ƶVpBY:hdʤAAdA D>RDVvskk⅔{γ/My㡋S \0=}9* ]>6רTb  Th%]&^֫T,3ڢ;EϣDr#(HQPF)v ཬGd*$p!g*A?\2 LQ &9% E8VQJPֻ#,WSG}܏4e^i~2/wn:z"t$lstxHSLK~ff=8&'Zq&d]f2T;C)T9EC{8n3Kr!-uP~gv0Q^o)fҮ,oNw%۸ AS^~JK$is!Iy6"cOô(y3e`wuv>>TW7'auq'-%RA3DCDL|wTLy1yw{QLJ.;Ru$!@ 6xVx$fft69m5C#sݪ-B 1JB$tV%g#MA=H2 w|`Ikl:0^Qpƽ9죝lq뇽;a!Wx*>=iǎj7xELH rDtB:G+1Z,ډ譳јĂej|q :΍s{- Q@FϭBG#=1U&MȹWh){ϖ^f@hr)fu:fTV.4Nd fQ/P^q4N BlUW/773ݛ?xi*t'z9??Bwr>Ƈ;bHˏ?Z v0*55eI iCHP_,y;SѫB~K6I.l{8iz][vDI{6ɋi[]yvXˆF]=ϳK^Yo;Ծ}/j~7sLLz7LbGmohez{T_%Ih8Ϳ'6m,ZeRQU٠.YcU(*%YժktByݙ^FD r#kt"A <_A(D3 %]vhi.D됪x,~7Wl0 -&K9+唬/*R.PӎEw3CD j+eBJPtʬd7A'g\J#E0vݮ]%A{p_:<w!)t bdny[lm&AaQ %t{a8H^l}A#d\03 b:DxhaSJ>Ohø:X+J k*UQV_7&`cpbb昄<&1G[4 3.ѵ"KT1eHч:,%g39qw; ´ S[#gK_l~ ~nCBEsM$ð_ב#g1ff5dMomuoTo L*QYpu olZܜDn$mR:ܤ:m&D`uE/Yt'wthsG~t[OG@"92XkT"Bx[b!^>\o؝Ga75orM#B4L̄ɎIh6e;n-wO:_;&q{܋pW_tFmʍCdLrKy߼vKp}e3s47o<%|*:븱 Hؐ cFhwN;O(za@ XBuFvpd6;lɺ +<TID>EI-V.[0,B9 Zgc ir'Zze|\Mpz~@du>ڸAx-?ӳ:Wy˪guCS*]~('"٠3Os|rr{.9 {B]W3^Gў*[x6⪐sW~ٻ޶r$W~MQdS?-oۍfHˉ'oQX,U}꒬SdaRaWW@  p:5NZuHክĭ+C)qBju2pUu' jpVc:\N;NMd5dA J'yP W[۪j?&7CyXnE޿VXbuaN3_sr1\ݚ59ϘA*D;0x#f痄?}{;tNё0d`͕djSo8=)uJ|B[ \S F* UJrv2,|~>yU/[%qK.4j2Op'GcM=9~1Xa8u^<Cj-gTMвa~D}!"Xʳt>?9 L4KS*`Y5Q@HBzYtd!q)+(dx.uHw뚵۽ۇ<^`d:&Wlr?%7z347g]]ggUm8{v>^y~6w>n 59P[TlQ;?4vJB Tjx _.&OgkDp@̗̐8ԎjalgMOTF^/FWL=U1OԠ)UeNɛTDԨ0@pZx! 
.ȩV+^$"OA2KD%eg J]HY:{,ѫ9cc-Q-]~:P/}mTx^)|q߮s9~;}1o&1_%OW?}巫[=2QځCT{ >`4h9Mcqzq*GASu4NX+ EL1 %"R)Nx %K(2fTlMQ$(t9R}NdHa$KN{gŠlc̜%LV<&* 1XhMi_g2^ۑ-tܓԺJ]fh<24YG/\'٤{yoyijp+ڟ.o Kxeo}އlҌck'oyxy?`geyp|[o on73Ψ94۽}ngSϐFoGn=6Y$<6Oj|ʍ4_Zb!;y @hZ磏ϣ䘤DvC(WȖ,mƨ@D#Lm tm+1Rim|$1 :W@I2HGD*(ti|̜9#ps+2uN/aS̗R ?;y2?RS9ZVz?I N'ޭi58tdj?A2*x5$ɌN0WRzуWk iDA2"HJ8x/2uVRR#eU&JJZh57[ob1EԸ|/:rJH%A=)#&HF렭WLЍIn 1σحAS -{@q1BK(OU&ʘ!F(n>QQF Es+c ɱ hi%<<%N jiQ(Qk&)Bdq`RBH(au\wtm-aIIRR1XK60T9ǜڳ7/ R9WiFƩKt,<\:Cse;tyVLZ|]bwnt:=3#vP  TQY2sjQsH8T6X(2Lqvg 5 :並-F KX8sl+s#vV9n6:EmhQ3FEDMGĎ!u"8A*V3 6!c\^$pڶc9BX-](uBPLH!;Zȑ%D"he~DbS3fX\qv\x߈X@Ō!{rHB-4zBe85,x>m$B5NͳWSu4ru( TL0@~›!*H.ϼsƖ'p<Gٲ;UԎף*MtYdb7Reɍ)X3e3cd=We,e J NJ xZ2bh3sHtb MΡygQY; gq90'o}VU_G6^:V]|yxCSP:*IO1`T4)EY0IDKPG{tN4NQ|ݞ;0?w?{ I֗\|Q,HT"zah4@ϼ`;Pie_xCvދ(<CW믃r>b6Z ̈́: SfT+^N{kwgbVjc]1'ɠAgOM%1ɌxvGIQۆ7WmxGQ iZRT*̺"IS<g2ǘEM:[h.nEm|z;whj c;ߦ8[j9@mb#DJk<R N`RA=ڹ5U5srw^"EXN[-zD.ݙ$5eГM^(Ġdg蔷wvr-N7s04@zZJΤ3r ʝ97{$d(@ EAY(C*LmD&Q|,d2D(ƔsI&KZ5^"S)iV/\G8|qt-Sh'7~s.o 5w< DTS4rtꁲJU K% L(/U6q5{|C, ۷U:4JbWmo^( <,ӼY]k_H2D0nx> ؐ)^%pC0 EZfl1l$)ˣRJIS&Wg&gIj=Moc摛cPTm5˾sk-q61y@Y eR$DJZ#Ar"EorKws_>Czx2%ʇ<;Ey6DysB-7Os#/?{{vWׯjcO;w57ԇ'2\ymr~ ;H۬=େysg]ZM]5af>ܸFzzHom^nefN^)To[8m$#* ft,H* t]F}O(#qS$!RmD#N=^F]$-8&@:OfR** r$(hZmSE~)(}R)4 76e6IeCB'Kr"Vkf΁|]ڲ;fg|V.`9hCThD,L6g}'0%/!N?/b-@ݬ\^$vfj} D,H9,#gGVv({?.yWb>Y_<}sd!Fi,pPK̲>t+% :6I2e/2_gOM55e:rT1[$S$eF)eVW>H(M^Ɂ~2LHOєR CZ1cC|]9Xi5V\yH'lVK- U">_NWsd^ >~7UgX,&X%͕SaKKy܅.) 
&עo$q0P\I!/t%@fl{f0>,X,ƚزGx߷ؒ|i[vcլf_>J!PhV0^=~={M[[NYcBP;Z_jq;#n>v=]GMU|)ݧ>d#MkBǣ ɲg߮ZޯDolTԙ5ߝ[]Mգ9(S犝!tJ#Osϴz>\̇Ļ0l%;f'ewygѷ:@χ>X~&'KF[Gk[257cf&XggmήoU[]=V>k4RY]Ԇ-OpjD|KŢ%;Vo1_'#pۿq?|~-U~?~xdz߸u5lF['?CU:[4me˧^vm5-yMS;c<[{o& k񪘹_N&{ Q^}f5[RTF}{C  XW^}tMeY;w+ѼvTiDz҅Jd1SVFPq!hYR~p;ⰲyGyb6K΁,6d(ޢBsa&FA'2@R숢* ;Mv6ɫ٘x!L^wYy#6]'aҞ2ԯVl^Td4z47@GUe ߾Z(K޾|]1C?^\_+v/qؿýuQ<{I=ͻ9E/J![^I8' loOvyf˼>':~ |* ]^z>g碓O﫠8;M`fۆ;}7_oog~FW ^hUߙN'} hQ|~X6{g.Q'mh7v"j@rWwW/~]._<ِz7$bgyi1Bg]]rƚ`k9i HQt!@I֓Sѹ0'tJ4b2ΣVLSd}¤$$Ȣy!Kp9$=;UBj5fR`a9C) {'⑫.?^-rS}G/8_899sz 1|p?HU6i}Q5Nac:ucKWdTz[Ͼs=hC_<^3^@q[N7 D{p]lDHD#=֒%i8 ?~>ٝѾM}LJЅ `[QH,جr;؃Q1:X%mV^Wr ؙr+IS\z+S2*CuӔ >HNB! "蚉s˔iߎ7(Dw ̽u0-[Q5wל?~QîQѥlBARɦ Ye ZY-5#?ѣsY$(h-Att16-(Z$vJ@ AdWVSĹYY/@.wܳկO֩\S5~Y>g|aHN ߺ oD3e vtg+i'my"4SnfDO%wA D"O9!xS ZjKr-%J۽rPHk_M#Wc2d"gU(+Hf w9񆒷|tb1rʳZr@X7hVhYRA#C}.CMҔ~1ghWba,k\lZ,Nuwpy4g^ V8$և2Z$T -xr*P f!E۠1^ Jc`Xgu:X FQ5"1ї yб ұ)-H ھ'е8xj+IY!Dh_jF>ٖ*U:t`ly>fȇ !з&:Do!}* ]IM{e* CE:X[j$r)/|H:JrqQ)6&2K$ki[7fXm!;+ ;_eɑw: /8S]ȡFnI*zubB挲,UYET f)bf-+^zXVYrA@H #!kLAᡩ`9E!y Lm|R U @GyNW#B D>8H 5zqjwU5OoJL1@*H %‚7bry)à-U襌~jNIè``4V"vK̀* eZ͠w?i1H2S`v0ں#܋!lbE(A&ųCݤ (2fc⅐gmiUJsQ0`D1xԬZUc#S|U7."²Ulug|͆]/4VUmPlfhu;Z$ƫ)ѐ1EYF $hԣz7E;jGl{ ĊƓoėt%@JHM&DۍކgC^湐Wit2p28dqFwynٺhw4 5캤.:CtN E i .ېl E6"(fBqfOLP`! FE7E;x*"Zč(c^+YPPDbOZޏVTf숲Mы]S})[M|t/{l }<.pU-;"8HFxY6R'J:T$ Dy _>MXwB+1x,>s(v(m )0J(bm;b+8c,*,2EG1FĹ]BM}n#TS2i%5,65zAj}dտ޻[PGK琎gL8䎬I.ZP rNf)!$A(3(]\!2P],Za0B6AlszطQ4PQ5{\κxk">p? 
7E2 ALp:[_'n]1~|7@^)% .$urDS'b&& PNo q6u Jb$YH`T(蝬) =V2d :u{" n(ƾ)HS] g$9"uJ!jtcL[8^k7 mŏ_&x~OA([?J6F&[v6Jc51&bnKCFDtI:րl ?[Y?ܯP"J╶:Ԓ{vk+j3qnQy2~N Slܩ<*%=v̧u >xmm \\VC{MJeU9ǬP,0xR J!U!Rk\%@l ,GIeXSvi`m4PHĹŎ#ͰJ3[ldƶPAmDI*\np=՝[4H@HJG` csDO3)jw4 6:vΣ3x%̾o'B,D2nERz87[?g\MF]il`{m ENWGvN(Je8&8]2Z1~>*«/< #iCfd( {Ѩɳ$)RGQ|R.6yhͯLaԯal|"qz&v\)2J>zųT4cZNhl6"" ɻ" pްoLhRgoTv&Pm=ie/ wFl&-jv_ub8ʵIfv5n]vX*UjZw IBLVea_]{lt`N8e-ϑ*eMNuק:.(l[$ ُOXAXjT5?5gNIDg0.,:m6(|l&IoIEv:Ax3:rѵCIpu ںNY1-w"c :RTX)U(Fz]{sI*3C\# ְ 7nrD=m rKH2Ƴ1[z- 0nUeUefefU2XG㰉ÒSaTxZ&R_p) Ngo}LV2Rw9\eS*z`fMWWjh =Ѹ7FoA MqٌQgv̔*9ɏٵ1og)-u*<g-Ĕs[^[gs;(V+\$E^b:P]jb+f)"~;8WXjjTO|ug?ۻO/O^z~{韯߿OO'>su~Cg oM߷#oAVmu M56G]Ox~YC^m|J{k)@~\|{^ݑK&gO:twѕu3? b~^2W7QRTIl6ܹ/Ub3C`Y\e:ۼjJ5Orn(1_&]oTG$l @ m#iJ Q(EqȃtK}cl{O <8(JG 5XQ+-#o%:*:rW[GRy4AKM ӌ>rlks~:m\jbR9c6([ARLHnp["eDv9='b chkgbtEAZJ#68R$#/g)Oc.*5Xbm Ez;(&5mWT6]SU>Q^#Mkyn`=uĩUDžΐE5 j`y ְ(NJ"v\t6Mm.b'gg:0')%q HI0ljd >I,R-t(joFI*N/Tj4tQQ)ıKU7b~a|5rԭk^])T!" 4XThBgI88"Qpn s GL,Hc>m쓤ٓE M'H{ R!d0 >K`H0‰@Q[gO.Pof**J!åh\JI*"f 67 )d4&jci-Q'i?e0k8nO60@j2L9R,edld,KNge﹟ i`!@@ph=xnCyupzsoy/OvB8Qly9+'QeGxN \iChv܈3e[IGe0QT!ǕUJkKb!"ZsQ4:ۭG r`;3Y+냉K 4l` Xy0 ffIk4:ָjeet6,a> _k"nLԉ h:*Uwkz l3] HXGǝF S 7[k ᑕjih5Z`Ũ3hQ9b0XJBԀPE>hzr,-"0hĥv ۽Ǩ!cFY viU"bK(F[ :aFhNAc1i_Y}͢xRO.̍ձxwC5 Z!2xLe1{aUD]tg9&j!uP5atQX^ \fCgi(~2 znvhYs38j7|cNzĪb{ c5?˗XήLWfu nUZ5ik2zؚؽ`LtB*Xȓ-~谕&{cj֫c0_ҨYߦv,o>#'[ 1Rϻ9Y”bW ~VY;{Z;w^e-;t.\; |om6ӬS&m2@IF-qAaE(s;W],_'Ы`ۛ ]`jd&{@7j00O^,B?iꪷg.KzVZG7kz%C(8t kttkl:w{>Z,1 _PFj*MhKT=*% Zlۘ>a-M+naIsNqS|T=^c( Fr4ђNfK~Zf;倛ͧtN^VM:yDVxa=x|T6n҃),%U!H+,P5F0ce< Q3!S{3NX5+ O#墓 ͍+F)L:d@;a2(:o;cKIDk1hFs+%;a0S͏!KӤLXSts-L{4Z}+݂*GU[ @[ru <l3];5M׻pq٦Cq9D$m.`I9=Xs.Y:@q1]Z^rz{u0hMwVu,7sx3yV˳7^vlܛ!ۛ<]}qs6&c_Bx& b.!Ji Q11xduI61J"c,Axfb8(P al%a#  8(U Cqo{BhydT X{)) nȝbh$#-+ia=hs!-XU45rӕ*.O5O%x])x@逈xf+e9DiKɀ+GEا^,ؗ yKE@I[Y,9/ԒuteK ?HnKV:U4wIS皛 <'z&gA ybn4f0ʡHq^mċHg PlL142J ̂w'3bX*8:u һ%E/|`N-&bv_1qd>DVv~/v$oL|idpAR64VX`ឹT|%X=ؚsgX^k+o𦔠nUnPWo0Mmt' M&ݍD젾~|p繅àlҶbUG!ᚺvGM"/{W\|ă}g^2CGs_>_nG"woN<,-Pb!)!(a26fS jetEyx 
{~3-<_ vx|x#}KD\8\lebJ=JVnԢ0w1R1^^@o꬝:}"MYw17Ehn;ލ_p>npt>Q _W}o"Wu;bsuMwo~I.IJ$9oٿ5OO7VI^ -m6HK ii/N6HK`X -mX -m6HK ҂ [=+ngۙfnҡvf&-it͟J ^ȿΒ`q07 ֲ/װKfWkN7TyN "&6"UV&弢D~yǫZ+iA{f70/cF칚j<Ɩ[TqRF6zFV3nG M6ᘔL3?M=T g)2Z֤VZ T*eZHȔCb[*'?͜} ָ=8pzvc?T] PgwmB֏R%?0w?N`6{,{(e5a0ٲ̟>bTP(F_)B+'I/y>4&GtKVHN>OadYvywd+4+},ޥQչۿX_Z0ٲ/`R"%nr3ky{&fLcw.Xx38fg c㋺k#KJj%gbah gD᣾QINܲSqƌu{ DZxrUxqR: c=& RaUs]bcpρ`և{b Bq| p2 QvDjǝv 6Sd*B1(xR-?nN`GTNլlscɒ4ۊ6'1=\OdgJ'pߨ#$s&ShZNykGZGwHc"gLHWMb95at #TO!I`)s 9NѺPj T4dE^Ԙ*Ɛ&4sr^XZ1K,1LuXud!8dGoBXK-#o' L#Lȗ j)4bAn1 x-&WhB 8/up4vR*W9PP {2. ǚڈn| ZE )h0 FrbP@ni%'-FPF%D:SBtDZ~nk(&N Ƀ ŘQJE!&$_ 0vaVz}qfCUD="?]@łtI@-D&2x :p4HM:V%D%iKT2c*$$E ]!A%J$Bd"UBu(|8j 32]6"h^= E21|ePV{:WAq;2C[^X8+j`4s|qǦm]֔$Sƺb|?ܮu4;Af2G)jgV\A%V !i,{Ʈ+K; {>^̪s  L%_1׋ܡk@)@T2;{8gg\@n5Ŝh Dx X`exeN;$3rF`X6B ̓a:`X%XcqXdי$@fj&kԌ4eP?zD&@Q`WZuc uv> 4xmQ:8evMggf1kn֬nH  pKvzdAIIrcs[޸2 ֫cTq`Pڀ@&g ZMp9 X&_!-89Ezs!~~Z)H`'5>T2'Y'nldlVNRID|Y(c1Hh69AW\ER!>OC$;)`fD LBD6T~OW<<򎪄D()H0`j?.nzп|n%DkZQ/u^W?k`^m9[7םc8HEծ<}SQX?;gfyU]'rFj*A56o?b: h>o6?V?J&\]Xeu +ԕJKs^n_\}AE{ J ZBwHwjѮql(d'I 55-BUZ>f g3gϳ7#moɑǿ 72V??,{Nr ,0QZ"$e[ w!)SL$ʪv֌fjzzŶ^&]_^.2Nom=.<c,&jK/f/=\TYۤ;+1riz]h]w0oyo 7UIY) W8eckޫmW/y]-*5-r0\oytwUEߣCpvV|f|-lu2_|[gHlNbs؜$6'9IlNbs؜$6'9IlNbs؜$6'9IlNbs؜$6'9IlNbs؜$6'9IlNbs؜$6bs^"/Gl0 1bsVԆQl|QvQX4ygk0k?BrS|9w)#d?$8r=:;"; 9FHc@ߵ}ZQ.Wa &jm?oG9.0Az{%Qh_|#EPt/5(a9אy{n`*!SUfMjm< j3svriT7,L8uƗ}nm..󞮟Y2:lsXJhގ/.!;?u6@h67tw^fw蓳>9xűWŒc<]v[sA/+ag>KȔ IY EF{mF9Y ZpTr m:-f7*|aq/}$_"|F'MlNiv>:|:|; 2 KZ!,)dèlL)OaQ| )D- 0 d۔&ʫl϶BnǜB8Yzʜ;Gy%<ݬwUckSǻ4n8=;GŔXp_ ٠]9#J _u*M?ؐ < 2d ͠X5dD.GBUJҹbChIcLc<جuc#Ga`P3/h0sQX:00 u,+qηQ7- =b3sv{Og^u8:Ci Ezj!k rbaXHE5YU$~Xaq?=> Z=s5E|?f9=o{l#p=D8A':qЉNt8A':qЉNt8A':qЉNt8A':qЉNt8A':qЉNt8A':qЉj9Q^ vp\^ ;8诐a9'3n[:g9!Dt SI.;ϢX7!I8f8 e3;3 VYh&Ed2YUp$XL/A:k%j { y >h;dܞ`~V| ݂;W^Uh5SpSMdɂ}f90S%N7SLB\ΐ 3S\\x+-2 Ǵ ga{vLi`g' 2ʥ4*d,)Ȏ< typ53gq֏߾aw鶎 b,3vT*P`XRM%Cܘpk\QLSK?h9>~+U"'o/ay& pOVQX)JS\B@ BfVSDy=RD,0@jr0\jsJ4 r:!LNsdrMǴ̮. 
جOpU QWr Ad$U-2ULͥji[ۮ&}4ՊuCk"ī)QrsM.rQP#ix7Q5lCqjk 盱pJן-GϤ\IH]nH 3agv%,ag ;KYv%,ag ;KYv%,ag ;KYv%,ag ;KYv%,ag ;KYv%,ag ;zC^sky[MW`+N| Q\+/+Pr\_ %)`1D}\oE];睬iqۯITB 3S%1x{tj٭4={RKFʮ[rqtdZaYHILijʖedщRD;2$ڬTS-2rm(-6773g$|"x29M'zz3('j)D`w?bLKq2a_TBJ΂Y[{rcᅚ?V_ҸWnbW)Qj@ 2cm2x0v![0Le oon9IiߝA4kQ( WS.*#GC!?whdpv6Ԣ;|7ߍfb4*IisXפrE1jJ^eV*x^dYcVq-wVּ1mbsri:o*}<+/_a2>Ͻe}:+ É 7߆F}]W?7?>ƟF`h75PO`[z g^}Sw~qb:\>W6ǻ[atB鼫Ͷ O3z0iz'wtmJo0Ȭ3V`Hz̾G B|*q<\8y8q΅[UK48Ϡ|NjiF]* H-ٽVwۛQ'^mɆr{L:_WN)S*)VWܑTQadkxVD`tڴc=7a< v< :i^1 gO%XL &T)j}JlJT}0/INYzhjzi/\&x:# jjbe4)d! .;UYRtH!!SPw!']S*:!lʹL9bʃ!חZYthyMѿVV~# A@ՍOҢ2)IQk?1xRO w0{/5,k._]uAS@>te/Cpdj!E&8㒉7)a io3poʹǖw$b~}!1Vb(DJzbnw{Ee4C97nZoK|WN>eɢez':O?Oߤ΂[.ќ\}(g0/Ts}7Lqy /Sc~Wޮg/̍1pTχgKۮ,@Wx5ߞM㞗c$6Iwt6Jojg b.qq&ί =rp ˇt׍ϊؒNq-,\n$᥯WO>1vQ}ņA^Aw????½_o0Z`V~d63@?{?~ŭ][Nqk[>j{X 0̭ǫߏӄįˬ9 Fk ̆a~1W6Do_Xe_Z#]ޅ8! 6}E^Aqkt;uZj<`v>8)t ]¼XM+l"p<CvV dEbNkQaz,WQyPb el4NjQa̜q=ѷnց ӴIɅkE}&ɑg%E?=hfˆ?5rߺrfgsV0O6~PUq,CMTDԱ U5~)ҒUm2&hۥۍ乮?}\Kp\5BW]h_aJfko`OYݴ؋_h3Dh 9EXn+(ܖY,f?~MJt-';dLWllQYg•#Ǖc˻y˖W+qXQ&SU,Ix$KdТx@9gz8_!>@pbAڊiJ%;bVMYDQ#ihgHt>]Udz>‹/7UWr- d&W$ZT%? 2WN*`DJŘ>7)Ȑ<oJ5dr1\R*}2p֥ǚd I bIEm]LjH&bthLn!Xrl z,9TuBOi~56+P,Ę tԹ&XMN*KZ9=0{[sȣ ʲwl w,oVuNgZDw?@ ^6nxFkygZѓ JOH{(_ Cf^ \!)J p0p̅aWZ.&W\ZujKуÌ6 0AW,!LLLל[o饊nϳMXѶ..ן JpnW?-1d c߽1ywZH[ p򱬬{o5yEO[y[[$OgɛNn`bة*GTd?6}H}TDܓ)b0|$O%g4WoAE:$6Aھnӝ|(YUcSDTY'T5ytΥ1Z*ܾ7s.LgOtws7z[r=QbDIh$8+VL,W'%`3=n}J=S=ch\{lz1'&<lNByw:;Ϲ+\"#QcTG \T\3)()ꤕƠUCjՂ[t]r:<<7>:^2q7?SDlGP'L;ݒϻ_{`r SnW>v\hY#N8봃c*7֩(43E]M\'"P6 {!:D5=RBo(7:@1 m `U9{[jPcF4=ٱ[쭐=$I,}g?E}4/5s1eΚ( h`m(pEqkիʛF6:3MdB/`Gq,^! 
=Y!DщE%]2BsչibCys7.&tvKơ boc_D=#qDF.{T~[ly#\ sByze A;`y>$}dJRHYs!2(HzZ|:IcraXd:,`9%/A@y !GSYHfr(#pr~q:m^ۆWmQ.4Zu"Va]j3GDŽ%rTJVY+I4?;yЃQm|g*߅hL:V΄֧@* _,h)zQ\,9NJ wÿEЋyXSnOƑS)UN3QVۥ[3QD\%$&ϔlBD@qjb )j.;"a (S:aJ<y2gvQG`VeV`NK\5>jS5j`Pyv.3J&̶T!wȗ%Ԗ5CUQh:A*B껙ioNT]q3shU|+u8/o)Nq?xZxQ]Ju%+*!ZjVu y{qK-W&,rG2SI\c>!4 ŰޕD:堭1N BgE|V\);*0wF14749⾁jW]{4kLGl1J4zy#f12}iVwwpY-]&;-<:F Ugv:LNo=j3dgsGq|  {!\8yL( 2X.5-V 29[u]JDܸU\pH M&ffr}${3gGٿ݋mqr=B&,ˁgeC1WTTb d&rwTT,_6.M/Mi[N: 'g֩U:589u-'[<$(A,a}q1d-\[lLrg/*eLLʍVxZ TxN&Ak_K?9؎+AɟO };947ma>ϫK WGk.U//#Mɭy̥(Luy۫#p EQˇ8|4?pTLJz`[:>_c/OZ5WuMk5m&6\#J -|QicpR} H-Sa]I\W_a/$r+5]͆Q!u?[4ԑ6ē$|`-$&"eRDI(D&[ZK@XhY NSPB)$Dzs*׽K~VWGYEq $2+je ⍱PG%QN+:JښNrs :lU΋$;vu_> vW؆/4<03mCoՙfII2@I20 :뎻:U'zWPOb5D0 5EYgaɐ*I,RlvE !I%iㅑJ@\CP q_D Lp72kP2>O'#w-G|H,aWhBgT'1,(E3qS8bb K;_.zO1TDt).zJAQX9~A$yKDΨ/ ԇy#2\*JA XEdJZ qQ25 D4Vn˃U=fkHRFZ-CiiP#ƌiAyvzs݂wۍA9́P~J*L|M\`dThOy*},ǙA*f P%CTO._Ȱka=SbNxtiܙ8Ep&H-YYx?1VYE 6 Z ;6`#*5"hmg}x6,b|vkKVYfC5,bEr_?xՂ㇪2W"SXkv+5W9^sHpJa$C~#墓v+F)CAnT SXm*蝱J$"﵌FMPHVHKD˸5vjlbv" agk8Wo*ӛ&Ki&\{Hpsғ"Y[V%2} M65 &~Q `0^9Πu>:+Z&9!l'͜ ̠eVdwM#ʫ]P^k{=$寞^tQg5&k4f׍$W.uuSOX}m;@K~մUkM2lc9uw1^woCw{tB:/3]gSw-[d:k[^w u^a5cպ)qu>#?ihoiGLfkߋZKMH#rIa%58R*rLS,E*E` t/yZ< 7JjA KaS Hƌ GU|VJ)2z1ޖpGE.&iQA|4،_V꽇Y܊򌣃Cf0^/<+LK [rҍeQ3\3#0bl9k3eVƹ_0}iiWSU ~n,ޣMYo ~M,_gIݐ䮕XC"ތvꫣQty-]m.&KFq9[.,Z]m> m43s}bJ]lguOfcN"S̢d6b'cw̅F|.4-#p Oj3iaw< {.8b:J%1GjU) {׊K+ūS[[ތ:A K[ąUgr<%3LNOW x˫wܗ,Hy\cb6QY$d"iY"$&a!s@(fYzrt7j-rAAE)vnTy rBj8_/wLm5ay!mL4%yPM% xTXH_fT QH'2<{ES.)AFU9MT*}DJ"-"0hĥv Uhre1>EĖPƽuH ÌМ/aVi.ݖLliƳy/x5OԳ"[i-q I6LcALW_ =dlJsIv j ХOv 2g$Zv%"v]\%.+W\2)|bXW]y6{]WK܋(@g#꺶:.*q)y/^5tfp[p)xT\.,7>r;]1ἴőo9}?ۇfEllwxq!_(P)V>|L!z((ʗ`nN-Ͱi+ގ,pL'> pk33 |2 @WeEǸήqh`V'p9Y 'Laу% m>~qJqͿSYp"L__?{C`RGQJmƌ6>f)^D G-YB؋:'cV,]+{Y^S,`3,:2޲$˧KKn~RXsZ&=k=ª\⒪^{Z>2;yIG8~L?fPwot _̨ǯL2D˾ͧ#Wu4%rq7 i97.YEzrEf]8V<K∯Ç0&zyGz}^GGx>+B z}^Gz}^Gz=^muzs;Ǘŏ,^-aSJIUft#Rd 9bX{z*;Sq`*Lŝ ' !"pGE'$+F)CA[2(:o;cƌL"^ˈiDk45[!--i]c@Es;+ D}&5GLxހEIDYb̈́cvnNz2]4˲[toɍLd˦&S0`0 
*Ua:+ZXY|Tdܮ5(QrzM&;UOq)}5?~c{&LkU7N\imԹgiVï&Ysf{uiWxDnz%j!)[n8TJނn6Y 0X-/ b1DHGiΡvJZ! j3wOYrR}9;&ƬpB0WƾՓ[[m e6аtw-$ht d22` $Cd2EI:S*2 Im\ q//>Rox?drq& F3Zb uVP<8P y)dm*_ 8?{ 56ytk?yIHk !R &Hf}$+uIېfOB+m_[?]夰[|UxM٭$ϕ*??wTةzf]|fF\K(٫hisFY|ƪ,R óY ч 4\j3 獱>QgQve$ u")IeT-8D>6ճ(Xڑ!y"JmjG/T1*( мb(@DZĠY3qvԳ^>5[p5 tijmLԑH URfވA%5矜E #4=~T |x);3C!2sR0hr8Xj:QI;BfBVO2PfT N줍_dHfں x5HĜX2Lū`p4k6'ڻ~fQNJ)6\Pu!2AŚn13C]OٚmkagX~CQfSd_+ۅŖji6ЌL\=QQ|:<2 m $HcC6-(" W;Bcߢ|^V&?e(ҭ(=a'뀄!乂Mԗ&d_u]#N2oFA8$Uo/2ë8yGQRN O'gaz?0iY'tT˼e-X/zɜ,YE{=Wkl~>be4˫?;9QY_/QQݦSl?wfZ>c:mhKGņ]*ߵz~1Zz ;>{2Y_`GGgal2_kǿ4I|`L<nߖrApL˓O: aOp.8Dgl-ɳS/ֵUiK:T$j(:FyWK]}}[t4j;ywLJ4aKn:;>9[3☟ :3w m4\tgH]IuL JEZV}x*ڪ'jHaȪmD.3 !qQ@Ld-H[+6(&_|HWLqXT-J_4NҚ5~bt!~ i-y_Ϋ= ᧒ܾ癨rNw!u>&fg5#;SA0#k Vj9PG,O8>ϝ⽗,Y ,a2B̑M0p˘ thM2z, d!!5{@@eewT&wu7u5v!N<^@1bjeft:Kh2v/dW?n@eDŽulC?01\叾T~^vgoHSk(\5$[@g+ڋ%I$^it佸)LTyL'88Tfdqtobvq6=%NLW_<'}Q>xoo~x|=L~~k 3h;t_Y{XFaLD F$ECJ Y2lp|;ꆱ握Β_r.-!yK)B@`9JH*Gz=e'&LWif,bݛ8n-+3u<;/<0b<'tq<1>/^8b'ZxaZE$LHA 0&;GT.9S2VXc^d0Щ^¦vJL \A*=h%|<n6;Em̀3BENW"XQ"(ːq 8]2Z1Lb+j:_{fYiBff( [Ѩ3d)@đ1)QM>.@L7͏}6FD;  wWBHl R`|Wyi5+`р,XDM y"b)mcFP:{Jřk-ie^T McDl&_ub\[,]c\t.x׌g[Ț\TT{ DHDB*դ BY-ϪlEx(xlv싇1YA lStmK'N5rZ ]Q; ϔ4U͎Q3^,e}LHjxJPZ! 
?8STĈŇf6,.@%-,~1aizi'e$5\ZN1@ng̼NcV} Y]qV$EjH{9%EKQ2#D(#&XXyg;;ŕ-UOvx4 e+aZq^WoFM@P1++OTa~]}/?>w5>{~)Bux(ߌ؊Mr3~ya$1P2BI!PGg-<(tGC: ޵q$2nH}ȃnHqU-T}14 iV VoM$z||=<_$)2 (}>N`i릦l(7-Wɫb&QC4u1ikkmeUEpJ̈6_zU.&kK|m $jϷIWoL5[~g׍t6 ia=lDA-`*|P =ٻ7MJQlu!9ulaGE3\ &r,uJ7u*'~t֫& ^;ן_{Ïҿ\_x&^ : QHmD/O#?CϏZu547*w9fwu9qn粯Vh{nvϷ_T%c}.[&hŽ|_a_O .Fk Uk:_ wkCJߥ[9nԑģ$|דI^ PM>P 6RGIzs"wϟhb;vUQxDQrG>ɭhĊZi1xc,=QIT^gƷÓDO[kYv\kY-c۝_xeɗg ʮ;q SƸ&W2ƉZOq\,c5n̺-AɕJ/DZ Cix`,(U2xi23Ń|q3Q1A s'b<Ď`΄t:mXʰ5#"fH Y@SJx-Tl׹ʺ"gZQu~l( k%oS7M/y_:1+s]*slGt )Б X~& $.ø5$( R[2`$TiDc2# # Xjgش37*njukiiS(k4@{ܒSd< Y}^bzéPchN;8|gEmXZ wWm~n~>VoNw1NWO;raR })<`0Jb}Xjԅ`L !Vk֢%Wu^zqsGcp.gUi"Vښa}~ _wufi!~zO?zju∶$΀'?'V \#a1h{) )VNQڔ'3~WYB)_%4U߽*נf3jT_O!\ŋ&ʠ *a }6,FzrSJ:~AY~vhe;5v7۹wZl>HkBe?8˝Nuۇ?j:B7/ClW)_~ hyNx&]Pa*3oyj1(/Ғ!nJ+%)i Hj"u:囨ՠBp1A FF(SfQ9d* Bс("cEVJb ԖiCP qR1i+n'ٮCLus' ύ-"KUI Lb#"s GL,ѫ&)uMBS)zvD"N&OpSjpX9 JHDΨ'gG\PW3* p*RZbɆqQ25 D4V$>'{\4ag8uk> XYw0mt*ʑRVcƴNh%粔8:^{ m͒<7es̹Ba;6foAbsqW[yڻϽ ׃U[Fe/KO/4HR9u*x)[>tjj|BՄqWՄf%(*o5I%1Qx(ӶS\LNSJ^@r>7D"G%YM(I ^w빫!cFY c*h%{gqoR0#4'@cogl6rld?z8P Y75~>Iiљ_]2>Xkǟn ve3 &#U]n|򣌗]RRU"]ɈXc3V SU"]N޵:ᄕ\#!0@1$CHd Ǩa =|PNa x$ )[: `ZFL&Z ivFΎE5U(jf3|+m2u~3L[zC -h~4e"vkL@gB1޽p\ ֺͳH(uB94 m:Q΀ bx:E Kh]?.'7wYt:kYeլ;%,nݽMy7`vyf~HAMn%sޅU-fd-7-swVm]kN?mk|QKα+<.>JzJ:5/ׂu#Rs-:[w}G:ԍ Kk&/ <)vчHS'S7y ɯA e,nmCEuTȂ@5J#T;Av;2=">ND3QL(K-քq8یڇ{u8Ra'ڦQZ>}>=_M At(YJ{Eu%a`tDؤ\(epxa,`8` cK- +G 6-Wy8E&ܣs2?{C q|4B)H)RRasR"!,6xGI4 G: 6KsnKxvMRc-km#9 /2R?_Þws^$ bS+_)TKc253驪Uu=c$똌bǴPs!wuL腶`!!soGqL:JPҿpf N":2+'V$4uA,} 1PtWzU^O/P\q5޸:f;t9;o~sBa".켮soJ%n&jy2ǂq22Nj,4^q2m r} ڟIWWK^*̷~"T͍l2|v%uBbogS|q~Vʧv 8*h}&h)LAh&Yd gtWGdW\ҁ6)I &s$ty,.O:$叫-Y:wU?V7,s {'_@cjWetq2Xy{zdPG4_,<S_hI ۽J 7׫OP('gJ͂[[IfQCQ3_fhExLKswd3)-Wzgz?R.cVJhOB1)!ڛ +ШK cZKKKQȚ^t9k33L^lA[]@cNR8 xm$k7BF6sѡq#.-o 3Qe:nȮWij-.O?f{ꉕSzҮoTvqC/6'#,&h*ȳIH< ن ͜9Y/$s$CP$>^&Bg>(r,rm%+Ǩf|U90B 'FJi[DEc,0-Z;YYΪr+Ֆ=ZAv^*)(,a菘7F'M j-dC/5E෯z=ω@fVF#T$B5+'= ?wGsVޤsO+ZM/81BfQR{,6lWI\̦zbM }=Uc}V>]Pl.$ %jqK*-4! 
*h(ًG=XuÝN}U-:A/fea~]Ҵ:/./% Hc^(6NhX/W 6R7m~~Fw  2m@/ o4g't*]:6q-bؙ}zT8ʣ FrA-1Y,@kvQK6 ->Qz-ć*%ϧC_L1^.*h}oۘIvusg!ćOR5chL;͍6F %d0 N[N}:»Do_i|֘H~%6~Ve#m{mo߼}C|2L|2= ?yi:`$Z[y@@S}!emm0ԑܤHkHd36PYAA;%m:Ofw2rΙ܋(0 ĹCl ]tFS teʬb*pot`$أ*(4tsCCw#p *2)ύ}95\c3@l=|)V_Pe"X b*r})u ^YPit_YShn]ƧsQ[K{1?_ FB0I!p2I;B\N(O`$ɦsF퐑l;a$3ns 8xQA%dE)9˹X8%UEn9jirFȂL`CLC,%g29qw;tɜl$W#gʷͽ*KV?oa3Fj`,r:VzDLAj/ q&A$u 0s:pM]DϊO3@!!*c9;L}-ñ^v*[ZT.WCO/'mdoxr[sc䃝v;g+v[U+J%i ` zI EUZ9Lqq;I]]?&YB"Vz0hH%ZzFTmX5c=RMV/t won6(xK dp^r_\hq4ͿrLJf"(bU) \p5 S!k5 ;tGi(ƞ'e CMM@BduYCp:G,[{g9k0k&fWv5WkCe ѫYgO9j (&"X- n,yό0+mJ#Ł,dBȊ 隄,jFMt!eX$]_9֨_e%X?ՈFTF5yL0XpqF*jIXhF͈4*"Z,F`tid]ֆd@gB Fɒ`7*kjz٪t|ոd_+E^/d )HYxKGi q[Hd5 [ 4Id eB/vEV},*llgSoOampAp} kKRռzJ4毕z J™(ClѬlDb6zRFߓ  l&鲷\*"\ (UT<=QR{g)t e \$6&B BPMʅdɑYq-rvR\ku,ny-7+ ^ТyHĒshiѨ옰,f-\FF$桰ZikN8[$Q&Id@Eg $= j}p2m'\ylǸq) d4ꖰxm]c%NO[|v~i&HKCږ&ޅ c0kk%FIpXO]tu<]' TeZL ^p&í &L$ ;4J#lSe_S]Wxݝ/#㭇.<`U2;[㞾]]l:<+ȍ6L5o&*4Yߦe4,ˌd2N)iyĨsH.z M!-ÃAH1{e@>\&"lur*pC(jE8Nەm{O]9C;3uߧ8kIx_Ƀc^iGQ1M1 -z$NpLȸ)eh;M)49E`zdd^ZŸ]p4ߍ꾫OE; >7KIr9UpZGIY)siEHغgߦIn }d۫_?'[=߃7x<~8=:U\㯣Ѵrڹk{{m][hom^iYi{2E;pʡeacUkN}RuPJ56!7+m О,Mn=m lsƪeIi}%۴.*6-q37s˭#)FXTǺ뺯XGOycZ&ĵ ).PHb`1@1b" NV^*Îkm˵k3Y͋^s5YKkCןӔAt<_:~(Ej Q'zZ0_? ۤ%0ÐK=:c./=-T~rMΔ'64$46uT6} TIQT>{<)ʽ%pt̙1K=*U`J# be}00+Cʘ̹ɡlFѡQƗtV6S4-Ot^s,_Z_ ->g2DOgLTHJ8(>%:m@wƑ6X\ۓ,o*հ| fF@^nO=K)aG;iL?Z U%VGS;KwCꝤ\ٮS՛c 9"JK~4pUXc,薆Vi 7?|׽:.˓u$[HjA9 ldcjUbOv:BUZ9=(4T8$B^@ t/+l|FJ0 ]׸ L0玗/N^wv꿅񩃮(b} s(שrr̠*7"fLpx4<`@SFeB+;wR{Wj` w1E/rgR(8E6׽Nca*{Pjbi<|c1TY%ŘaSNySY7~yW댗;f\ֆTOs8+#Ahh@藒[( E % wQ3)yB S.59w! 
yv" Φ:a^Ms*¥"HsBXh%W֙B:{} l UŨ)9c*Ҟ.*a'>2 {{o`7>~B,,Ʃ(=d s儲:+'CC T/U/|RubWt m7,iZ,, ri*ρaќ!nrHNC^<͖y}Bp쮭di^xwּ氕)Bp1A 4 HxAa%@I,R1K$Qޔ(T+6^sR[ERc "0WOpD-qV:x4O0TR}^!Z['g50 <pMC98LꡚB ;Ǜj4HGV3 OL{ 98T 8pt>o+F9k6&6M_&kjMɭI/7n6:)V/Vvds L (gS5x(J B7[:M`/0@&.C_߫?RCWnK[t& UTh&I9(~(mZ<*[x~N-yR ՖAvhfF)ž R E|`Ab Y#5*zI]Oz#ۚn,좵Wv^u܍y6ZU]?N㎡hO%xX#[O| I5`"YGH(򹖜N)b1 3K@RS":qFRЏG}3I2k6mɀ1T)b+t@Hm'g xXefT>D BͷaN(/ݖ{i+" f_qfwgW_wf@9-ѳ<gKf}ü۔AmN @@F K@GRQ(KFO i zvW/g}ݤjk$%- xV(,lp&K/3g/.I冷;_>&|QPE,`s)WnyQpv慉zYQY\t)ۿ)j7_T}4d8-7^}*(of'#$ءN'-'П&Ίę\CۂuU19ӌ`{ϜMu茼=eN$]Ac`H WD8Ra$ZDDG$2)2S*LR;oygWGCJ9ōTEĖPƽuH ÌМ,nʐ79Zg,..nG`v5-Y^sݙ7և,u{uؔ%:GFTĝcu{ɗZR)ymycI.ˡM"% QDɹ1MYPm'PV'Ae*Oj@d2<׀&£h%7kML(źsQO"~ iɶiX ;3]ngEUl0#EpA 7{#JEױ9h.)S#' McEѠ9 QQjp TiX;w[rX-,&_{nu?;ȸ#/O{rC4̆o/p88hr*'#!r/81t M R2> fJ9m-)}knfssʍ>G(W,o+ @@γhkrDZ#8SPdEv~ׄ%,3%t$R8qֳbP ٢AӏKQ%B8,]1XwGcV qI Tߊ v_u~C UaϬ $โ@4Ȳ @H=*l'-w8 5 )ʥ1?R4lmrk\2a8;7E NU-&[m^]ɾyw#MiRNXBL:2aN`$1Df4!cǕ# IVnEv>:~31JDd[3o Mq$hː[68NY\ $) ejמ3W+JƟgܺOM|gi^(~(6BxM0Iz$em\H!w+LeE*(4$i6fZs&)K#R ,GP:i܍&NƻÔl#A>EކF 8_+%p58ŧ;e'XK8e՚kd,Whh<1j̣P,YQ` ^s,B% DEAZa\(G Q}"%[]BH[|/^OQ26b ?ZL }SAa,YBy9ģls=hClz槪eh\kv^V ݯPdռvyRgO/ t<4zB y DN/\KodΫӓ7ɩ%ߞU,/>fѹ8Vw~G.8=+myEU@ -eNj Vp6)Q<UTIn70.s)r}[2w06*;`CcU3.h#4;UunL>}mG.@vJ-Lҕ]}tE\^ŕ[9.+<\w?Âne2,="Ԋ4t>l=3}"fW?V /1(n6Y"ԟ>Y ǣ\6f>Zj8 p :ET'Jڣ/FMr u~. ϧyo (?0OYѮrV s!N֖ eV1'a"Q_J^~/d ceg'9Kn羫ݭ|-ev ð|Ҿo.L3EzgG(M_Gr7o3S'~-oCe·"{{1 ?g/6DOCy,ft-e}\rj TpDW4u1jʊ"<9)-h8Ikjڨt@yO 1)S A3@rYIׁ^ڙh$qTX*D&N1CA@ʈ 4#^X˸30)7.j/ϿF_Yi ry㩍h.10U磼WìlF!ВTMe*I(+fɤWgRLJU^>7X~(_P2G=]1_~ѫ/_}~~}#ыyqsu#0} 轻: сM_iSilo47b%-ߤ]Q;ڽ>Z^ھrnmw/矞4ƋEff& ]ΚȊtes=U\܂F5?m4*k1!1v@YU0>ZtM|Os$T"x\!!R>0}RerRdg} LHNfL<&o, Žgg(рDcB)Yk{sEΞG詼 #? rUB^Maw ،WoFHk'ph7xu ,L.MV+*o(dk~'3mDԞyX7{g{bZpky%uw彍'6vߨ5L >ey4 Eᅿ{J)Qa w{]ǯXϻ{0jm9ֆvDJ24!:DH.tѰ+4Yj(w,?ne R(n)iKFy_Zx)LU)C!sEc(O!@E1;V$d"YT)E.CN3(-]l ܛs7p'yl]9@waW߬_%d7ǿ=ٝq+P'i8#f_y ͺ٫&i /ƐHtJ,ѳjsVG=? )/ {¨lzp]x %uH두B;O} `kC 㾆q_øa0k5} 㾞y#'3󢒫SyQg^T*08t!3L' ta:SJq xaqisbґfyikp ){ !w eġu 5Tz? 
*M)K\ d}:]LP@I e& @& ( d ]rS\[ϻmxh53>zj=kl1 bZʙ'7W[ey3dU9ؐ}?ֿfz0WeG~W,/Gs\*۔vWNn+l/_uD= \/f?Ez;7n6\yq5k t!]p֥blyzJ)Au9F./>v—L}A .ܞ\8u. ~\8w.Dbt+vT{iR8iD^Ne 6!˲&B&(ǚ S$U9{[$,CC:i5#g.SCHqzo3vY,nVAKdUʘ5@R{4,}d#\uq(zUKP|"]g-'*DYH Ρ4Qhpu}+rT:o]cg4ôt(N.Z:wHM?/7jdzens<9X\lL559_t۟+ɦZr˾i!A F{mVBH2*av<՗K}(4t%ҁ-J.9]Dcv,wR6@`iPXI֌ȹ[3*ta3UBc].ܫ.\QTmvQ񆂂>fӢ3C&Ng\cGA+4C"$1DOeba ;ZmCu+:Um _<[- BOMITD{Vܭd\C͸cW6Z{@fӦS#p,YԮ٠ lb}}ᢴ#|VQS}X!1<2#CE' ˺&HFr0p:F"SM69wީ_Ux(uc8hۛI(ֱz>RޣjѾ(2MkS.[Ap E] H3 *î7*b $]& B{oJ8e.x#>Gy^Qnٻkki CBaɓ^ETƒC!Hdr)\&߬I8;8?w OF=+Q\]6Z;Y Rt:0AP*▶^j!yai'Z6S؝j56)YTteƢFB0 g8K,Fi2FQR&hD|$Kz)m1 :9[j=)Rپz$rCki4q>\t]4rCn5k#~ KD0ص9'YlUEQ).dnBދUVl:F9;fA "kBLI !} `IIEaCS9 B2#Zk*rd6]P∬3&ࡱ5#gK9g f=NNncT( RN&/PTeƍɫWlZv)AZ E~dG򁑓K ȫZʇ$O %a%0 PJOv' "vRF6>YgJyї) 39Aȃh8ivP2N#|\^nVl.JF]bBӌ% {lF\T m""LŔ:`,4c]5emPlSmtL@Dƫ( 9E/uԹ$"avcՐ*t?b6.]K,+8dҍ)(= (˄!qMP*r~i1dVZ ~Ab|pCckhP_ qgjƝywfa@$iKFyef8[V6I(Ɯ |Dc(Oض㦌!@E1;/< dv{Q'>سM3(m&;n"g13Ne$طl>;v7[LKߜj7itHH 7tOɘ1 N;V'4>fZC vaK퀌V=s%FZKX;բ0}=MPiS,RH)PʬFgʳU>˽4:/4xj1?Eǐ^FkA>jcxCP-8fj*xTٴqψU k_ka݁ %uH두7{>9%ZX_-vtNGĒP$6' P.?zGKI.転tm B]6V w@5 ø cg2]X ;Uhk+ֺqfmzmIO |/ dݥmmaHPz>?ׇLTg2oJi+@+ esC=Ѽ+o)f2t/{'ՄuŔG.<܏ZvFEkCuirS_3e*l%+^z;u=9z9GvG^`#GEE %6_NEx& ]z nwnxy "xg?ecQuB}Wh~-?*wyB-]Ls/0Iӓ(Qa7}f:g17>M.+RN'Y/Oݗ^r)Oޏ?PJfYQ(b>gSj ˡCUPGȘVxS=bCn1؍Iv}SpO8^o >ۯcJA^XT%TT[i6GPTu>1"g!(jc"ŶfbrFKEdkZ:% Jۊe$J)PN(Tc!nFBB]gu=aұ)Po㵝|Z:puEw5_Y}>ܶ2)ԦϾ+Kd?ɛ7qu_tSEe6OY=SujW?O]+՗ vm:wWNoUʗ/[5>+o][ޱ" [nQ6\i.w"WH+m}6%zmt+w{9Gn#QӊG(疜<c̨2Q+|gYgFr\?6~ߏդ /MgUjlM~~#G3ߎ|'/DͦjU^D̦FZ7_}7J ߋǷK&Ukvt m%g|‡dЫ վl*?u>oALFtʢ3V`Jq3``C%24YiJ:rE5iG3{$i5k?k9H*%2O .h2ľX˒ȭ6#g˘JPoUk?^etpFI{"r!Xd9Q%xHcX:mCxXᮘe]j$gaNR$dk9'YQ'ﬔ% U,AܗcP̸'4^eT찄;}ldl?Ǚ$n`$SH^l<&d.Q8 LV$ rIZ`HSʌB4)daTW!A N%;a*k r%%iKZh"z&$Tܙͦ[πF?ZA ,(^yAYq}S#57ЧQ)&}U%bHNU߆D_??GC -}Ȥ垼﯒Kn"m;2TQ`oBcjpHJ )#QHm~(qzꪣ ?i$Dtl(_F"2 <e8A9&x6c7O%P0 (+:mqlrj &qPS,3r|T%ٗ"ÃRRMSm lo 8+N(UQG٥15yp]8 pI%ȜO"G'R5 Zb^]U8<5rrǡx9=uuCauby чbR(4$vU^ v]vU3$=:@>\(,9)g=}%&h} lTԏ379ݏo?WW߿=D~{x+8u30B.laBᗇ!qV}u ]SŶ>GL6>#U -jQzߍJO.ڼC>N͕"3| v}5 *~+5]͆;d 
]w*sgި#]&'Mz%r=@2O(D&[ZK>P)6RGIIOmphY:Z2@ <8(JKeAPFV^bED1$jůS5z/};}~+NĪ}TDqBwVf6yx LE4gJIr y+cvt\` OA FF(C0AGF8ɃE*[ϡa9$AziㅑJm1>P` й]kիd6H\v5k r\,u2gzE X 뙊.m$,p£NTyS 7pZk ᑕU6h5Z`Ũ3hQ9b0XJB]7tn/`&y̗>Ugk$NuC􊽺<>OU⼎ӟ]TkL㢏X'[W9L3+}ԝN..*#dB>= IvB(ا\RM$&*9F(EJˬ L J N*hX)8Hq;A vD>P, a[PՄaFhNi{@{Cv˻$r.J\G?qB]db?bٺWI^kٿ'Z}d>1U+n*7D)rf{1ktཧc= ' AC2D<;NU|wP=d&#QHYu2LwVqY2b=6MVHKDbjoly(@nw܎ SsM/n1`V] opYC(,OcULps9=+v)mm䍨I?"I͠b:uZкj׭oY]% <@hC˼4 )6䵖7CrVtkެKƼ O]bf-[ʻUF}C4M(`4w6ObLJzJ:TsʵugSziTu57}Nzԍ s4Y{X^S$MNwYS>uΖh~-JSjv0JqugHia&ul-A(Yچ(4먐fk*F"0v2wi)>1b0CiL7Pn3Z=NmU#Ϻ>ߟ.x& \:C,cdc:30:"lCbD)sG23=cxƀƖZVl0R[y8E LĹG)O_{C _ouz9smKKIޝnWB mVhq=dl:j?1[0^,Ci'$Nk$H*H$Qh 4O?g2=d={q)xD9i@aFXe uiM0,(Cr""0d`zyz]U#d@ 'l8&({;LuQ_OO | k7zaTٸ@fq69I9HmQ6ɁB1?KwDa~ߑ0bl9SSX'O҃˘)iw맭{E@o[ x xB ^n%59}Zϫq 6I:~I(߉gH%ț(6E|;]p]puWEͳAXUڢboze{F3xk&e^.T2B!:,L-3؂Gǟp&fCS86YZRf84&p| Nl 4Slb;9&T앯xU* VfUG椘[n6 9L08vTPjdJj*Y,KOh6f٤X-ϮPZ /}f1g X˳=1X(5EH1)B683i&1&0]b6Ozem?56Q1LUmL $#_'3JÆzk:\Yt]W4K;|sܤ "jxp~fpoG!z@8(! HN if"줞eѴR,|&0:wSWx)0gz >h@p l+᫣Ҕj8+?wuH|QW!i@/#|ZFrPq$x}U$Yx#(W*_G], ;ofuW{x@Z&fk!̧cO|jl@og/d-wөϛ.K|Qi6/IĔ>2 5r-.#%&Rs!E1A[]_m&kŖWC[w~-k߄D1fZmÃm#*}SX:#H@՚ Ƃ!B1,J.Xk#~|//HGhu;40;R=1Y#5*{ KI^b@כe7ڸdE[[vo%_z6wlQ.y^˸(ד)g`8EW:/+*c‹a/7Mvʫ_$Au.h8!<۴CItOxzM1gKI  [k5헵>lƋQv.T4VbV5B?duU,KγKWYPcv1Y,,kāy^1{{%<@rFϵ,wJasmmS;'i)Y'˭In3-1NcӇ5*Cz Yz\TkjXe;߻^árgtu5ߝVqIzW燭C+Ri(9K9pp_ 0w*!]K_C19/:tÄ+tv+~n`Ǭ[!sږwVZ4K=kK]rIPkcPކc.aU^| Ph4@@XkhBc&lu 0a)^ 5@ =V&5V!m׆-N{Xs=B``CNf*f~&u !IƧ7;<ǰ?4$/pODf337vMJV$<_h4dj8!f%#I>]͉6df%)8 E/Wz)m1  sb0:t0g\ }9.L/zIޮG+_*OM0cm=W)ҽB(/M /ҕlA Q C][sfZeYbAf)(bf;P$xC,>(29 JH2A$)PT-()i(lh*gA0V2^Hfdh2V+GzlU+:c"Y3r^|bJ)Qk'mDkdRN&/PMM7&5JF'G(mOT_"R~VH>0rR2.0pN"?qT>$}`( +Y{T}V3OH`f)#ASd<[KJfb XsQq4Q4Ҍ'v4r=J#= ?j]JQWrجЫP{ica#+7]Djiji+~_3U\csKŖ;h:΄ ODdܐSRGKrQ*Fh'k0֥)ՊCvX:=WÚpY?e]4x˥@n@YmH7;hj_]!w~~"٧hd%vN!")Wu}Sʖ2Уv=vKܸL.蒓*e+2 G*'9WOD B]ۨ[ui-;>b-Kj讕eSͅ/O*s~ c'a*A]=1w gBDae@ 1HJu,UFm"Kg[]}˻ct}ketP;ypc%U㏋=.Ly^Lkd&o:tDbc`U4IDW=WګqO82N|VEgkт3gDB Hzu& "iLS`ZKC(uz_P [;RQ1Z (Q3rF7А !m{o|lkEۃħA+㟻} 
B>9Sn[@6'!a-<4F5r:ִ4oO -{SNG ҔEApIw1)n:`M(LQ`eI(}ԧ[ ^N!oqb#|&b_A/t!_0xYv~uZdɓㅏ%S@EuuN;8: PD1I:Āʕd[} =k Ne vQbD9B<>TEmP %hjtcٌ-|ޫCy[mY~-Yܢu"hoJNcʘ5@R{4,}d#\uۢ=z%ҰHZmJ2F@sE6flst2~dzpz>Lf|k;zոucQ!#UwbZ17bRO+ް%񗲊T+? V(Q S)bN퍶Y !}ȨT/}q\"{B:+(V=\rLGQ@ٱKHCa%Z36#a4Ӆ8cW]u!ppMQYE6^7=W9?ty/Ntziz<_|;*@XI]@=0!R* ㈿бv< qcA̶o'/ =7%QɺKyXcǓYbEk7]668j޻0YL 9H*R>4:3M\ad*j66$GCfd(dYd(R.sY$vɆ5>6ǮQ7ֈzԈF?3 fqP*R5)9NkaEkbSXdߘ4X ٢*UC,ؓVh!Xv mE;GY/λp )ٌKvՋ^4^}Ͼ\TPn ƣ Q^BȲXͪE^܇^>lCXX t*l]SfC>6rRŃ_VGuC-6ic-kw?D^ۨ 0$ֈSt]TYtjX&ak}>D%^wr| NVѫj)+(-RLf$5`rv;:SZR"Z~M\p@ɤ&r1[/K~.@܊-w)逺in{nx,KvM,b9񞈜uYET.^('RXFf&b`maS̃_uKӳyWwlמ^aNR$dk1bdPGRZ ,d@W `A33Bfݞ8=N%^[{L3Ij! O$SH^nv2YZ(AkOȑ42M[,$ -咴N(q˙*3# Ѥ$\vQ]1Z(FAj,H;%;a*s҅tItJhKZh"z&$Tܙ݁E֡Eqe zY2MN{+@X>ϒ$OEAYfr6ѯ 8S_ߚ ke'Ta1${O}_C #}̥垼$<7f_ߙ׸ϼA Gqp.>|JM}<댠Y%(CbFm2@wr9Oqzۇ-gGo+lQɗҼYOB< }#T|1d9K<Z[^D^.>|lFma2/,f'X-lzݜ<MnW®G;d}zr欿ʤF停\ozQy;]yco_zÏʽ}o:XPbp&#@?[C럿bhjho548bh-[r·[qo՗whڶff/Tw흯Uri5+>YWl$瓴DO_oPeHrBc݀*ݪ~KOBSïPwuZ0XciDJ[FҸ3XPPP1΁,V);IOpX/C <]1gG "I%cT޻lh# T!*CXu&:r܌'.ǻ7nV[wg7-:bo}>Z{ylЄX:"EÞdAya@!H[2ʳZ6Lx(S B&PmC~D1;/< dHJlːL:J(%D[iE8Voҥo{OA>Oq57uOӼ[XxvӇL4#:z.tdt4LJRzXWjq=+hak/c|flBE&U ج/a]\H38rz ^/Mf\.dl߭=8SzAڟp]!u ͇;\v"7)&ӓ Y;M|^Xgo$Z{3j_<N>6+w[{7^oypn|`uNߦٶXD/C{?mlGyCzoHv~2ZV-;#bI}wߛKZY#$|u]QTPђQ7n3֋\mn%`@%if栭1NU$>c&RٱFު:b]m`ft;YRA l  [NUh3A=ECgݕ!}ej 3AQggV눩/bWۏ_n:re^#{)\I=.z]ޜr,kmH@_.ݑk'w@^{%OE2$e_̐")5mI4W!(F@))\QAkZc%\YgBb/4h*%㑅H>8Z&` Xy$RD{鍜T}2G{xb+Oj`ڿ7|*؟OiH Z]ƅՔhTQ!) 
k F[5K>1E4CR(M!xGIB5ὑs7/&=>fNܱSA[FwX_; lI]xݑ6Kev[L+Wj_7 y}OP</A%CܔVJR203AufסލTl15Cp1٠#P]N1(K+艠΂%"sAq!$ X`ImGc=X qR-1+n'g-7r:Tlx8pFG| H%p&DxFUp"QpYdNሉ%zӟk{[9$N1'\d{ R!d0>KX8P+: ECP<FKeVѸ8aUD&gln!4Wx vvآm-1>;;܁"!g4c{oxT^{y@FeVKO/4HR9u*YȂa`thO|@0>@)@WQ@DEr\Y孴F0)$FZ"< 5As@1VYE 6 Z ;6`#*$;A 0UKȹYOyl3>YCM vnWd,KQnĹ`1X9N}Ij/9v֖R2|%gS4agH݀t`{x 3S_L}ė+2#zኴlYz&JcDH*U%ʺI57Δz2^u:IDژiFKЗx}*\diO^yO0bʓ{E) #Ѫ&*5pQ!-"0hĥv cRNq,c*h%{gqoR0#4'@g^ot'!yüxB.F͚˻}~.B O2;OZ1{3zW7%)hmR*36oǕw%R8R) Pm_ ~ ?K57o $ҖZjO]Jz#F_7m|MznGNo\ aS>{ķf+o *F1Jd׊[mT7;-ŭ9:9zQĽ(Wfq` HTSjYjT҄VuWZupq8OJyZ |¶PN._ZB5+@ ް̀,|ώ7Yau^`8a|`N\Ӳfمw=̫> ~6:YrVv$Sjv d^ZкzԺ[s4eͪChYB˲Yݻ5zi|w獱ҡ畖C moqu杻yж%hwt8nbt;VKs]c[0"7m,zy*ڠ1[h\3GP$St"#L!$IVbuբg}(I{)E}[v{7D/!7nwlԂr89$SSτ$'FBe2J:VJhk2,/@v-hr N4V`K-c#ţc8P+Q*/iYGچC'i?? Y[8U2)KW1AH*DDiÐr31~[<O&)jhy 'P(.ogWhub6x7;N0`60݉,(@o[Da797ADX#ΟMDSZ5 VEM<:qa3`>3==!^ve+z]v5:Rjſ ֥宕7E[m_L/oWgxjZ:FP?{doCH־GoФgն@)Tj^afl>|[I/k+N<;Ňks߿_I[ula8i.u{c$5OQ&SbY+ m_ɪQnO=DْF,ؚ8Lj %2ExT,{k>Ds5>B`-/gɛ{_` *wq<=#zJLbFK)6"R&'/89m_'g/u^k|}B!9=L7Wiz[ofqٖ&g_tq"jW dp-~@@ޜ &Xc(8m }zax K|6<6J35 NY1LN~t<0=0?e&5ptaXLuL`ý"zJ԰Vz43jxu9>~5AO^!_!Dh0\|i8:H]gWW_xQ``ZFJPq$$d⛫i$?2o\ Ko*Iy۶S$e8Y m!VAa>8nKBzHN^-T'BdI_^ӴkX{&Ffmܑ޺t=bٓj&!sE2@6;/;6{he$&o|k_7ч;;oFna rx{R*GT| R,sG ]5DCbY`X爊~VgD*> Ҟ")z40;RO=1Y#5*Y" t]9 #_oƚguf%JYT~{+26 qL!g<x߀grPU3V~AX!q%7  mnOY܋k'Hry X6#.7bE8z=vj ϒj쳴YL+r[8%W4|D2x55K.~1A[Xc,3fL_pV5UGq;ޤ G,=}f RAj2HM 5&cQd, RAj`Ȃ!d R"2HM<&dɓAj2HM 5&dמAj2HM 9?'?69^ j[FmF7 dtn 2AF7 dtn 2AF71Üusa#Y[FH(򥖜N)b@bfxޒ5y{@kQw*nO3{gC9^d }h[cR;ĬW逰 Oj+;^s| *۪M74a=jm 2|s6_3t5@Ap>bnx E~7 r%]my3G-V}۩:ߑ}đT˼ Wf_{VݖrAbDN[JoUgQ0yT!n0O?TsL'ÑAkš#*6d%^z0q=aؐǰAaCh+OeV4Q$x}DJ"-"0hĥv Qhre1}sVA-m=3{9Zf棽%_ˇtZBb_N* ^}LS9 v?NkO\hp]U%&Ϧ ~ل|~ Ԩ3z'奷kS"|oA^O'>y^:-7?j#j՛:;Og.fF}(-k q-NLukzzi/FK%eߜ^ŏb-Mt ʥN+g;VtIѕUU#g;>;G!d3âd(IY$SJr)a0EШ%ADnʪQ (QzSKI"rH(YM.*ۘU7gǾ |LZ2a/QYavݥ‹E:|6$7: / e{V [n»zzqsY[pn;#䃢-jtEm{vgd1o*>\FNWw֦V.q4:7o~qZƕl:}~kYz3ޮsy-7΁iF n|~oj҇o~]s)X p\M6L%>T"(97ϓbt̃MIyKx7 P~.~~^ yz¸X,ͻQ/N:.ڝ)xc'|.uP'ģ@Bb 3cLA+,YE E)zJ 4MJJVZ mpC|޻[t(Òu*e 
@xlA(6ڑ+m-D ~yV;$HG6"3hrtimcCm&vC751,e|gi*4ӓ}*mfeׯqS5;lw38XFuHA `5҇u"1&BcL'1GlzD##bY/gwTd@퀚`ѐ "kbtid4H_VcBhf:t=yBd8_%\g3-/~~qiSޣzL?MnV.fӏׇm}o~\*BV&6ES1S( =he{V =$fa_|S:Wԥ KE]eoz= !tr/^G ބ脆P"e&SB$߸:>w&L!@;Pg0(A wn%+*JgbНN 2 97Ays1W0|ؐpS,yewfө 4vZ glJ΄ ;( ʬuUa ƔX0'rvKKZ֍C\D@>F/DR.u濩u>G3qv`z4eS6Ӊ2od?>q}GAx0J)&\*^q &4%~\15e0$Y`h#E BlmlY;*"(/}vqڀ&PDd(%f8Az?flGvgچA]~_LV}Gvk~ǮnWqNOꤽV:0%MB Q 'vAb(Y7H>!T.`L-ߦ'&o^ ΠQ&Fr/MR>2 e՞y<.4xbZl :y)Dt!aT&XcX >f5dɢSYM]з5L"~AgKo.CV@Ma>šӯ3=VXѢ%j6/)?`׿l7Qݧu]\w(;y9bm]OfkQ*h _15VkP]eMԺ$m|&7 ț))I]:+ր*::=kӾ{T}z zJB@V;q"VZ/ :t)i-dd_%}m[6eJʑKYױ}F sqJQM$>蒯L}y{ǟ`/L' )@U?7R㾮m cۚW|gCAk>.}Ya16MAיJkP-4t9l7gw3}&f o' Zl6 uNn]}/[$B2c?EC14 eB(vn ksK^(d3O !P&Rxz/і Dnt,Nƛϵxg'OnܘBq'Xoe%}fqU$SQAXk5x jwvX+r1W8$VZռ.-Qw`7Ss'B}Nz|#5×} .Yzo3qWވueZo"ΊN]TYtlpp~Lò8yD,N[Ǥ| (ѨURPJE3,X ;>Ig駳`s}m}2s2IDx4D: ͤX&x,ًz_fH,zia;?Nߜlx-봮8|]<6СET)^(@A61iYɼF:]bs7tΝJ[o%$,',)dž^tršdPk<:)[6`8m£kW(V74E2 XCN Aňd 8Rқfc<9K _O*4'n$!\1)X . bKPq96 -bQ ؐڹV2- -9!J 90GFQ,H3(cv޻1q ~>3 ¼#GlS ʩFW:VghO%B3bO2!׋1/B=s^|9հ:Ϲ܋]m,*w4:3yD&uRF()$B|Ô^%;w/iOf*GR$FHt7_󀜥ɛ߫䲊/m=:> (&:9Erxd__NOO.]?{Ǎe` 6#QŇ8Kp[ N~mͮ^ñwU<444Rjڀ- ?l_VɪjB{VWQ■jr…xʔz&iqd~9/ѯ2Ow`«u& 3+avWvmWxus %iGKMŨn,fN]F_9;]U9\wR7ՃRy?.}rrVW_~^ ~)d 껾Peeۓzvz@ׯ5曷߿R}Ь݁XM 7W?(h*oYvE(wY ߦ\e7{sXҾΡի@Z67w}\zÖY],ͤY~i͏n'+:\ݨNB36~ONSk$\d~X 9D⵴Z>q*96^+5iZlza)yhY BEh-֪@b4(]cӕ@*E~hrN$+~g}Ċӝ^4oŬ*[wٺÇǂ]뾞"hwUtaZ|F*gNJaBm$/ FyUKcM˫+:DBZ;B]yᣐATW7FF75ԭ3JZRBI;t#Xovs'rt˳4>y揣ݝ#곓c64n1u4_#x/v˙L [1ZWSސԴS>ڦVXF*ʧi^9D_y4D_}eC{!XvkFտ% em9KkKKKpd'0'zK`N͛!L\ 9Z)_6? 
)k7xǻE:ЖѶBrk6U- ;|XMRINʄNJD&kB}1jPZY55kЪ`+[a%ѣEnW%V3մzqNf7UK˩;Cˋ5[Eٽ{jjAub\[GUa*l@59z[ӈOwn^b[*x IKpc\meպZ6N6:Рjk;zݴwv^kє5 >Ev_QZ N=N,sot/7JFkS)MWjclc#4]']ɤw,UP,b[>]EgD|( ` P#yBd/ɚ| S[pn1עM)ja̜>4,Wm}mbtNvT%wmH hG0-X7m4ʒ@+e.VR'Ae+' @hvRJE9Hd7MF36r*hsFUu"hqPT̚DM.3b^?;P:Utm`*,v-Ԣp6֫؄iCqVJs1htEHj[dz"\ F\%m2cgqD`x9pӘX[c/aAڼG;85.}q ǢJy_S*Yōp_*HأF׉\ §.WL|ʕQ d$W̿"W \1"W#+J< "oOwh-tajUy)r_'Q =?_~~b~iCw N"wB@^FjmO?8L&&\T.2mҥ.D)E*G)`׍ˋ<@pT'V5Sy,=Qˎc8<*vyH]`6d9gXޣQ宆KYt`t' ]Smupys멋L}?}"qZFS4qq w-rZbVl@e32.\O]s ?2p8Velqm62L뒏Ăx>relz'Ez[$>L嘒:|Aˋntp(]gə\EĜ~y1/dz/S?ڟwly/1&׾Fծr)tnmZuo< Wv.7 k@''b}1Q/_T*9Z#}_C,r_/u?-;*ܠmNߺb)<gHŁ.ӳYVB?ImwTOB T}lj:rozklMtEhm}:֮ ]@{q[8fy_N{^>Z/`ޞaM^ʙaL^Z$Y-}7k\I)"WLktrŔk[\G8n^1b\йӦ/WLi]ʕFiH؛lp\iUrŔ\q.#"`;~\1M~)]12+V2+"bZS)r52~$`!"\PXWLSzYjrei.գuIEؗbO[ 7QopRG>Jwr|~r\2uX5kn޽xP֡b[&.02yyt1l#8 r%SVXiPU5FV^WZmjen!AbR .F | 08;=lcן(=vS:o,ʺH[i <92]L.>Y{rLiuF9cAɌ䊀1urƅlhpSj(r5BVuER qu6'hֈ)m9A|mnOX-z'Ay\ŁCfGZ(C\6X\E\1Ѻ,r5BR;[\13g+"ubJ)\PS^bFrNf#W볱B)A\3*ɽgqE׮aY\R$W lU6rŸ^"WDdrŔ8c+CӚ䊀jEMrŔ\YeZ(I5ϸ~O|vs.Z'TnC';&eˡS<Q|3fUi!g^L #陗N|\1-$̋)My|o=G87\0t~fh@?PJ!{ȕ/ruʒKr-WY3ȥקJRm.=;@;i.=Gʃ=\z״O T#`?x.~Cg{G6uS))M5Eqg&J]ޞԋJ\)%\Pu&'Js|p H]$ EF(W AZT6rE M.r,K]S; \I 2'b`c+:J?ZQ*)\PcJ|pƺbZ-S+v5FUPJ+CNBm:ҧVBNV%+eiL3M^ҕ'ip",OuҩybjӃclZ꽰bSjY5lF)>& GCF qu6 OL>u)mم0Fk>urgq1zL_aJ_`\i]k'rp ["ʦ1ZȍǑ$V ĸL\1n>OV'œ)H>rlz'8~O3ՏV AejTRER\Ia$`6rEJ\hPYzQ"W#+%\_?\\1P`?֮\=\)cI"gqAe#WDktrE8ך\cӆة55_MrIO@Uv`q_~7su꿞Q^:uo)(ʿyպlF.{pxxRPƫJԧH8⟺U,N/NΨ.$yG]VhЈig7(isn k07+nn>pvr@#|%ܓym&=ueKuqˬZ:)[]L54.HH(1-a~>x{s |kL\ޣ#F2|WۈΈ i߿uuZk4N7e*i*OӹhR ij4[Y.yw>GI{]=f4 \0S5tvK\FABxٚCtme'Uc6P eAWdޡQku]SYPtQWV-ZFܦP UըѪta#1ovm?B)nBM4`c+S+/}c-u t gv#94%F@X`e?aƮ B^%pgɦ<څ'WI^J:4f>yDV,*:6gBh)EL$?'{gS dK 6umЌv4GhѵѦd鱁\7o^!Va8 pJ@uM+5RœDhY{n1;Lf @ŦИU7T3F+PtRU=QKʇ< h`D~mOu hQJAnxSd,4D2R 0@>R;ƈa4Bf|SHQ"π^l)'|oߝdCU,Z:d %SR(U-рOBr>=7B<A޴\s9 Sf0j$7.(]cpπ^wF8'֤@{ur38:Akc0mƀ> c_Ԝ`/DmT/ *Z}~S_ZhI5 I{mg͋شC2j(UÆV9[0f %xD ]Vzk#Ҋ IeohBJv'S B  TTPtC[B Z 4BsX?ҀWil5 "(e*9<Ī7J|Yɠ-]^ [Fhcn eeXLĝd Xqk ;: nhJ}/+*A{Yl(EsAEF@n7((ʡ7f! 
Db$ۃB6rZQ8b!z_4yA}p#aPyrm/ĥj*٠db̠JUYa:iB0`s#v3T./|e);sM-x bcgG]`100!ƻA TxU2GH}1[Zm\U&1:#'`]@EE ]ʃJ I"92e( Pti?X yy{:%+&CFPr#VH܁m CK.̪d!:T?Qy'*SL l;V+.$ >cӍ9;ׇlEUv!ȕ'X1Xk3tdD4Hcc.u%m6@^Z@mDBMuj WH% ep7P@R@"pc2Exg@rFEK֬kdž@ f՜%C8˾0.r}%)6ح6^C7i|v;˷͛UoεǷcm\dWv}wq6X>Iͯ 7ϵ+m._~~:SK#:!v]>;Ewm]oW=h $&A*NÉ' borr@kc\_('!:?'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qz@>Aw/ i9N ϻH/ IavN CΉH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 tN `oܰgmz8 dV $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@:\'U1z ''bz1N FK;:D'9 '8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qg[뭦:\0]^VSnn{}zCvվU\0@Ϛ%=ˆ{#F( aItõbνtb> ] ]-o["nK+_:]1J%g Zbz1 /%)HWĤ{__o0uuCywٓwˊW-<^:4r'Ξ(w yZp;VurںO?ǰ$U ,!s}kvu/S MMSnr{zeE^yw-F u;FG]XejF⭟|{yxusmߞ֋ePzZso~O-th_\Jߞt{~xs~sӹwCW/O /Q9wT](Ycl-P\'%Obg$(h͋_Dfy83 E +/֙(PGBWz{8t8ߢڨ>]=ya;# ]=y?G ?_h5Ape?r9?w{~e]k^U4}7ק{8:Y]Dz*}+zCj'lrч{އ|_tUUHt-Dt˳꛾g~m'M^5?d`V?9Mx k$wN;6K(:8ELC>ݻ8ޡ}fZ閃6 ǧgXe1ߴk6vkpw7MϵLfD;*4{"gfx3I5v#nJKj@RQRvR8o_]1`b& (:D.)DW اÍYZQ BZ]v1tpi1|b ] ]9 -`kS 2\2K+FK|bIC+#%bp^R;Qz)?AHknaΜP+zýAjV߀޷d"6i1 S>X<&km('?ͣUC39F 9v1QUi*$pMKAFF. 4f9uj) Wr2 `ch1tp[ ]1_|N*hY]1జz 7.F]1InJ܃= 81䛦}36|"z{_]#芄::Z҂]mo+Bv/E6h]&XQ4]Iے,9c{&<|! 条vZNNW eۢzz"k:DW$!':CW .]+Rў^ ]Q+;DW sxV<ׂPҞ^"]1Me ` ]%=~Z\^ ]fXu]%*HWa,4.ӟy3j^]ID꡷=] AhYz8G!RGf Ҿé؞NU'>jsĴK!ݐvWB+ZQP*4Bwpl *,BW cqtP2_]=!L)ym̞}apz^: |IPꖝ+Cc]8B)Jp ]ZJq*䤧HWkDW b;th nJ(]+, ]%uZNWR< U!uvG]Z[OW B=]@iL!Rvg *LBW -mmo{zKtk UBKU*䪧HWiCZchDL9;"Jl>K!w'#՝Q(4PoD_&M3yRFm?OꠂڴZ{. 
if_ñjdV IԲ6׷lΓ:`đS鹺$v *G'_JjyS3wgǽwށ@aaye1>}6r@ }A2=^*2U}ioߒ!ZvQ=~@f<$en^?ag2T_9)K@w:;뷫/(݀̓kiكYD1II KsV&~&d(6؊-6](M~ֶQ,Mdt^/DžJ3:}~)Bq;Btř!\ P0vY4a#A2 [-f"Z~0*]OAZCfԽ=B}P ~O̸?Yruf- 7ô1_Iw)vfxUTHgЧ cl)/|p#+,d>(i~m>{!I͹6%J}?*泫mt< &9Rʫ ic2@/{Ϝe".zմA560*<\O5=z$ $^M(ˁg" ) #Ѫ&oQIeVSdRdA4R;B!cb}sVA-m=3{9,ƭ:R֒7~]N{KVͮ%buet%jQW>ݜҤ݇4.^ٝr؝SOkǧf_*RU#+9Ba"0^?JuJL1-ZjxpM|%ٛұuѫ ؎7?լ&&RR"]ƈeXc3V^>L-u2HVTVr Ő O#K/W>F S CA2 Zu2LwZ D佖豉hj4BZ"_26IU0 |>*g47>7ev41ӫ3 RŪ6z&nQWahp蜔n;Ŝh$# Ccp ;ۧR^3JϧoG+QŖL.a6|bҁ.biTP!AS !ɵr8pt>oxJP=*g &H)IƸ|DCvhJ§!*S܃qO /_Lˈ7Q?Ɵ~0r|;b?iy5;c 8O>M$o^>uE"i-TPnZm183t49-oNG!p #>B!? L&f44wT,.'÷s7~ QW5Sކ{ ,8 CB#_.|>IU^@V %'ǟNop/n2>YL ,nXÇE3`uYH< |4~XJWyMr'[n? ˟Ld! y/_(ʾPv : Tt ,p< ̓y>+V$-\m63GZw 6̎h@s{@&,,qκ 5y- R3S/1he5S<^|/Zedtk1o$#CXp{L:G.Œ 6˄ igdKgc Rz,WI%7}l\S`ivʦ6i@41K=thYsz!U*nwT<`5D2`PLHsFWqL->__QC#x"GWg0iaw<I=1BΚyY{k>+|1B:mޘmܘ6G{OחYt>-'@`4 [U^g}QJ'i5JvsYwyKp1o nY!T;?8"5I-dVYn(RqkX5AgFF$A"}C.lv Om$۳ჰl߻6-[6bcD4lQ7^.f>6q}BBe,YJL&0">}BɆ#3j1yȃ(Kfb%2=y煠O0V#a@ʄ7O->#ېgL;&`5yrGL}15;Nf)#$A@.t ƒ.IF0B6A[zښ}EnEe:κZxkzzvs9gze>23k'|yzoNh1~7@^)%LT]N| "N)v@O:՞Nwu^N!$)FI6hNЀZKQJijME YNPEb@ipƘoH"rH]%ŠcRƨ9۵6}{!D[~-y`d :xC&ӈdu.*/ R1`5!$6EXAFDpI:l]{Sm$%6fl~t2~Ni<ܫf|kWո5k~~yv:^J&n>͇̓ds޴kI 4[ Y{FaLD F ECJ Y2lp|;j<\ḿ/9ĞlVU! 0%$#e`NGF +֚9kvX.l Qu5]h~Q>B _x65vR: U$m6E„` csDO3ǔvu7ކEV:uΣ3x<}aN@(2"C룎V<4p1Ek7]i͠1ENW"svN,(Je8.H&G5Շ:_GXiCfd( {Ѩɳ$)R#(~!\lN5غ`9ީ_WŸ/mch8hͅ]0D ^f,P & Ȃ LwF pްoLhRgoTz&`GdOZUBX#6#gZ:<<&֋d}u6]kݠ&Ui4002Q5)þy *.lу^| /w}c}cǽ'Pa_PmV?\Qَ[$ ُ/⩲Wf9ˑNNT}[tKIuh}݅쁃`!i]8O{ PMN>ǣib^ƻZfvv~zÿ|`}-WF%3(ol1TbL0fhCOMp( "ZI[ F_?F楛9zNE:w+ݺZm|T)H_TRA6&/%Qf]NݎrÚ䕳s'n(' . 0l$Lpx[F W3ZhPU5FGkhqYjƥ鄝NttIe9o!zP(|ƥ7i{Ca\GIYA:tRPX)#ڢeM(.]vu-P{R"ky2'4)B%I1MSADn}،-CWtn.y,_,b9jm\DP$m B IGlK&t 7x_pr1w3zaF)ς^а} Y/ x)=KErV =h/Ry@y d~~:연xm?:B8SGĈŇA%Zȳ]r`a\i$(IjGKbDM WqyƬ4⬈I XvJ-pFɌbF[?#&Xqgr)kNG'4:MW݈4{׋#Gb<ɋbZ#5pD>FCΉ5?2T"o0;HC=F^Օ>SXrOǏP=Do{Qi3jg5L=s CJ :8Gh8I\OhտEÜU0$QJj k_Gzn/yC?B\Dy%_Jjw~3QMcP?|O+.yzMÛ/u ']5AkF9?.V:wPߎnݣiڇsNj77}bXK"$~Od|t[FYmż?({sW;韍y֮d+[F_صd_Y3MO{;XwZrѳ5'WUb󪫛\{ FUgV2R>==3_eT. 
zӗ:.l<]\Ow??տݿ~wR7ݯ?VOsB6 H0I`{Kϛ_`ih4ZZbi+[r÷Y״5ou_Ўʶ/~濌gX$/WY!m=+zGy&dq&2p=UCA_^c݀dw7HIiJO2_z!r҈>,C!SVFPqZ8T^K} yA,6d(ޢšMQd/A'Bb?W049יl_ƚNecϨlvJYts5X5dLa=~Aw_,ޮ=<>Z2t!t%%ǞXZvj4MBT`sg*l ]xFSR9`$0B`GeQ+ 2H(N03,rIzF)1Ajn9ۗF\M6V_F({̥>7 --u@cxR^h@q@G/@+! q<3_ETͻO9&Y |(Z#)B,1~@F_t|[t]ۡU5TC?@Ҕ[C1c]WͰU^'MN ȣ4}b~dF?͛kA0ȺGx6~ToN|Vdr޸aUp늿nu\퀴^i@<bNh K˞MdLAcCw߻2u8FGҔΠ])tEN%OC/o ugWkZKhcW$2] :G XkWu>&jpk}Z}P+אYw [nMh^K綄z@H)iN?L]k,s :oBfn v>yj\!~Ȫ QSG2Eˀc^ Vz6xADAZژ@PBJF* 4mgi} (95-wuO*!JRrh -h5 "URn$d'>!>I^m+HYq)keqP]d3DjGБ_]mB[ V8$JZTk0[D( PK%(ʱ,}L6(;N*w~X4,3:HPJdh5S2hB=? iL%<9-: =U_@־AC8{Ѿz$1ۡuח>}A8h_.k;좀:v% qHdyD);E} x=!^$xX)`IWA9z}T.! ,x"|eMb9;Xq}r!ڡ۽0wb~xV§6x,bdIMEDo]s7WTWuZU=wƵq]R "Rl_EI3'5tnHN &֖yJ$ӫM[TɜURu88[dfQ^qFU@$@[p~t, N5Y{hw;(#{>/x8]!J%z:AҜjuݟV/y\wt*SJF=y%029|Hxg$?yzKA)Z| l_A+bHשdŋ?1e͒?kmN8[<$VxC]o6lsTQZ>ήU nϒar |RYVz˟G|>Fɒ*]rmԷhL(W.'򐆈|1LC!Z#JFS$Ԫ}1P᝾ybkҸ3b:mV'"?6/R<6gBo3 w)W$VC{WǶLl鼜^ɿ/b?Yr?2nL3,7gg_y;6!AVjw{C7]%Zq+vWZW f0sstժͣi^*0g[fw@+NòT5,K] Ҭnzu/٧GFJ _Ff9$ҕ݋Pp5.~[t%ԿUMp cuhc콤\v+)QF{xكٮa& n|PTPjiorjwk uk@VA)cyZ2Z>0˴o}uRisQqL۪p"ZRa;\SyY|7:]{OڥH{z2C,n?Kg/o练#˯-1e`*i4WGft=5QkN25@Yk%"B,Bv&=] ]iteKTʔ;$a|PKڧA\?>GHz B^=v I}0XDE!\M$heפ Jogp\TDDWpQx!,Bt(+>{LMћTO5kW#mX24CٵL2=]+zJ4"BFCWW ]!ZfNW۞N"+lu4tpƻBBhRN8# G]\It,th:]!Jkz:A#0'DECW؇5C+X QV/{:*G +b+X,thEAD)iOW'HW :"+ly4tpy< et(E?vut9'EDW]!\ ]ZdӤ+&tWє!CFj+  -a0r ]pE;I:a_FV 1CvyIBROm}*ύ΍,ٟ 3pm4~Tvo@p~F ]`%A:9/Dk;?E(5]ٚ"(DnGFh)9Rv3c0te{WTpMb+E4tp-t( E 0$X >v}8]!Jmz:Aje#+wHWWX *uB][ 7+!9&"4" 2=jf(J]t$`Ѳ#JHWJZctDtx J ]!Z-NW؞N4zwM(aC! 
(X)i{ء kk)=/e8Rj̘Xv=b5.u' *ΫA9RI }WipspH3Kk-D%+?|۠ex 2vP68ozK-# dRKJXAʆ>*K%`d1Xy1Lқ-Z_ҭ8OzCk78>] VEͨc`\bҥV\rb#VU.6|m2p0B ,EǸ.OqTvR]Vak%#YX6\ή*Vl% ^CVI-\mY,7߱\EE쎞ؽx>/+]>p"WȌ΋+dT /fn̺A֧]{6ך#.&shC }fd$r@4(OD_Z*Md0_{_&q?LQ4Hp\<<1)~8?;fj_bI'_;s~MޯX0߅FSw~Z[nR_36`Pצm`!1 =[ɐ/b'븪8{}+-_F{!](КBhϻUjI-KX]|~hoiU>j)0,%'|SL>G#;:Wg* Wj <̇,r8wEz5]ٕnm{lW {pI҅\E(jĬv8ZH)ºW6) +?yb<^g`[B$}Pp }{q+V;ɥNvƓKj+PK(vO.Q`w4RW@0AdgU"]QWMGv깨+V[z=A'QW$'UW'QWRnhuuU۩u3+ 8슺JR+*Q+ضD❺z05!uusvƺjvudSW/Q]q|φ'nW̋O[L^@>= OA91^3͟Oy0xiEEN>ߛC>t> -yY· wX);ܳL73c$!aBb@` sEmBHy_ݠp7G}*psGLNGĂ &Eݕ: M?l>[M<* {76ӻ(o.zQ~ak~7F#6,>Yo)W{Eѫ# f>>{eͱf/U~9{aſ}s-iӍrupu&i,㩻WIUp4ir)BSϘ8P=d:FY^‚SS߈[TJL)3iXPqRe6l%%8i^~Q\6Lr}MK_oACD_Kp29n * XLzmT .fӰu*2fR\;p&ceqgW{fXψnɜ#BҬb s 4T^3:}а:FJVySOcFگ]`F50R2A3+OĀҢ$ʛ)OJmh(Mil<mmϳ֌m>ZgZ (SCPNQ/n:sJp_`vh5V"Zɕu&D(f"[&a} ч~yp Ljw6c& Fc ƹ7,֧mY+Ev$ wQ3)yK2#, 1 =!"S1>`}HJL7dbiRQ Ƒ5rUcض*= s2OzrcCY2w1i}ef_{gj~K>J1Zm*H`iٯ-UYZ@+dΪJ֬8$UY[^eUnȋ]6iPhLS88JLTd`T>qPÒSp?{ʑUiӮغ^MI1 rEs νi1o(eBQ@%.r(" D HXf@:{p}}';"E =:dNF0(4N+ B' L=qǪ8A!U[.+AGMs#LzA쌋RCzK :Hؗ(IؿuU :XA3*(! xB4+V]:AjYRen]C nX #;2\Bԁ#(b!,@Iv^\!/ǹwY(vf>,ҐO*@$y_\R-EZSN1hX~Rjz) dd\ &՟s\7Ƀ%$ȃ|_T?aG>(|n8)(F"?B'P}3s{erJ|?~6W  ΊN[l\䛀\:$UXN /0!}w\'. gk8Cz—bd"g fp[^Uqwq>-~T)+: Mw]N_645fsF?/a8-$Rs|_\SvQkkSR=y|{yqZph$3b ONz> `9 f{ =Qƞ_yS7 bi7ч0aU*ٗDO} 7JNkX5Cr$TFRB9 }\ 3z,5JokTOV5T ;:ݷzo_;D}wt?{ƍ\Ww]6I6]f]АH{Tk zEq{a6I;2yU=1I칯#h'T3VGLH@*K;lPHAid_g黵0䡱ĵX1p}V -{.lN\t5'T_I+hs5_X,3ڢ;E%ϣD)QPF)!(\F @J#51q8d4mkV>k6%BSq;.D0;tLN]hU ";ec8ZsX4ZǍ2 •@EOVs^ARNx=UiAtnOY>\(O,'D摺DPF4`ڦĹ?ǣAlg!hchnI+&[r$I}NfZe' Ϻ 3nC6ip}95:IZ\t x6E 0mi5K&E'PހPo2);M:2օ]X(Nr5-nYĜp6#&:q hSʽ@j"Ȁ CP@`l7界tQ3JyNZc]ҋN\z:9ԅ(ͳU> +jFǾ_?&Jm,JY+PUIZv$G"ِ-'ِm&x0SJ.mqܣ}<Z"5ƳώK<3>v!bk- ma$P~fj08 5]ɊF|;xY}cǡC?DT>IH +`ig!όCq 8ʡ>Ge./qg:+΃ +̃vd& : $0 q-Ĕ/zRsBrJa4һmZh"EÆ1qv8$Y~s=3 )!xCnjA-Mj;^ja ?T%:$pCFT$rsb1hЊ}t ȘޕT٤ P4cxFYLVYfyљE-Q` BFB'sYs!#0f,KkP1("i't&8yb^z!iGB{˂f1}hC,@q:GAoa*AVb? 
'ԃ@QӞ /*jw!KMFoA(_z`Q9H,'\f|"΃ɂ#+ok' PHΤ>vд knc2ߌN0|F0m4ӿo*0?~;~_|ݧX|=CҝY|߿3j矿" 5ϼRօ*%٠˹o{?-R3&pp; XZ޻|z_~G1*iS2/h't#=lp}9}IoF%_eolTowd41 XF]修aWB.m])mkgߌF"(lI>ꕚVflJj֝wH6|8M,Sy;:fJ",wHtnBD%D,Q )j,:Llq"B9 Zg 1ɒy5b j1bcsc/bp#Ww͇\o8J5xRWDUۚ;Uջҙ%Fkz{$EeW߯_-Kׯo 発|ŋA{.w#RߗMQ? WX jo\_ta|ޛ-"ŮL/Qۛ'h=Od-x2@vy[xCovqp:aXa00ZeRQU<7@>AUBJI*os #@>3E҃s i<݇v^v})^ M8tTH32U>}2^U$L( }h&$)i7kk"^nҧy׏Q?7v$ v:sۛNȶ_߰Ⱦi<{m/.Z߰9yÞڗ\O+bxzi)<'0yޮ⑑=LgO'D|@MU1璫W"AiɴPӑ(D)X(D{N$K)T[):P"*z éNEi *xɀ}igwv}w.: .p}#wMAH@"sbn֪ј8vMbxύ#\{W7ZZ ]IHogmfc> ^ZYs|ʧoMJWRšqê(Ql``*o9L킩X[`Ʝxζft!$&H ɢۜA:o w.o:m/7@sA`7Bg-|M--?Gs"V nUobk^WlDu_y=ܦD}urq9/SjFXfٜO'zȥ-g~,s^⌕ϏQ-f%TwG2jYBG>je#}4B[Vhܚc9n'WGߟvkԮWi[֧]ӣvkz}}d[wpsuݾxfwy!ݎGDH~xi6]Qλk]KnOKn/)bԶ3i |9[n.!utmmmNKW2S S4%jۉ!tMa=նupQ_rIEkǘ(rbQk"Ť^WېY">*sz_IJqt9*hXc־3z0tCi{X0U=/-/ƧOvtŬq<$=fnK.k|Yy]Qꔇ㕏5@>ڠ&yvp HuAN]"3cpdE`}BԪ&=MNeĚ Z=xBC2}QE[NQt^v)SfĈ&՚ [>$\Au\Rփ&j##T:l/uգJ:2H, sEZqk{y|7--YTvksERzC@ERPvۙ{T|;~3ZQDRhakb {6RڇL4Ǘw8[sQ:C3( QlP4MяG~d"FT|8~C_|`}fK?WTo:K @)]b@c*7Ei?O6<[x0mxJubw7-+VX-L!.;"fc]"ĕJEb.b L hIjQ}G_=5zm"^Җb+CdXOw];ňd#x[7Pz*PcLL>d?4%p.²ݯa6ō=(̲Ir !*L_2 3Y)ـjLsZ.l8ktIلvTRcq+ՒSFHX$Rg!L8t0YPldcNZi@#ʻ 捏`-7})x_{kmVq1_EmR\.@"mP)cdeזsNڬ! V+5kUBޫBO{v:?b72|fU5h&RC cъt*j2TSblu%8,  X+z2CeGUWuĂ*xygqW_~a0ԫ7ci!N'J-JݘJS79tm9]TlmFȩw&m} U6hQ!5G5oxؠf!NLJb{\-.Yy5ܟvh( SɑMF{6S~͗cI@23ZClMH*ZtzJE)Ei}Zvr!fgC1{9[B:0MzL,zxѨΈ~;؛ƗAB? NH҇u4M!rSz,ZKCPz5Un:M,a~~բf2ǫO(mb3<%g/pe9-xk]VQii/ p2%t6{w~}-ۇJVA3eEBkٶn!.߼>ˆ^Xh#^2ȌB³V򿳿_~://]"3ib"8_k$tChtQdžVuCC9U|z ޔy){HOj~ƈή.mCn_ꖆ9D5M~gѰ_kvJ zJjОFh:Do:rZ}Ps"Գ}#r\ 9+jGI.r]-I K]S/?69;g:(Z#TKA?aadBju&@3_$X m {kRvD\lh<׍fIsOJғ'w~Lz3x. 
] ZJPZ] ]yѕ)ʱЕj4Q%~^idj/|t>r~hJ]t]˜f ]5BW uCrhE.'zj] ` }U<u%h5UCDW'HW]*~4tZNW )#x4tj3jh;tPnWt X鏮`;uѨ+A*NW ] ]-3(FВ:]5':E" ]?`ǤE HMI'>Ojd6'Lį^rґCiFW]DWvfRGDW ]5BW-XDW'HW`J;GCWד ]NWh <&gn4t)tc骡:E"QJǣ+}^h= ^] 銛seGDW\3jp=r~d5)/t=+||m?\g.Vڮ9[^Dˀi0OoW_WF~z~(ŨFڠmqUů%J{?}&lFFc,:zo?U /]#vJo?-/oq7ᢛͯnXZbo}E֗ "6?ϗ !?o;pg9o 77c미oosxegoe}lY(xS/G+¸gV]b^_ *[Y eB$w(~0_<-p#px6ϑ͏9S>ҙ@ogMmE0+].JGz:^m6/U"E9 XtDj\&$o9g۝f_ʿ} G(+Mys>Moü_fHG]t痹|K@ :)u5Sq5ZMVE![b$qdɿBrf_= va|Yx0OE "5c-E$5Y:Xl̗"2r=Z'k̮6 2=h14"SAosXjur򌦇Ҟϱv6-M0HoͿ징<)9i: d6Q,|TqejcGosaL[D9Y51PL1:VjoXluFqZݖYK} S(gC ]~ j(鍵&rw@*0f="{ jszX\fh[Ș+ q3#j)& hb-w]Je!QД3n#@~F$G&QgkX*0!B )6UJ3coLS#%csU"πY7M'ո}sYM4't`#:WTyKZ|Hrna8Br\`eW5ͨ륕1`9VҊXm3)'3Clkqƶ3 |nֱ9ɇ-[?1%E[qI#VRG[Gwhsv>,l,ͰkBm`5 jg:y3*$7*s.!z6b?`iվب5Jlz,6!;3M&(;ˤs;cXc@P"X Xufӡ8tGj/‚Kvog0`L EU%s \,NJb XTTPtv@[b Z 4AsPxiBˊ#yP\ ({<Ԫ5tJXS^ɶZ](.L-c7 d X͎# 2  k5h=*|ݻѡ1**eX/H1&؉yokmNաR- 7 c l@lBwkYRpP\{B;Bi:U`R7<4Edu=ܡ 2y= ߕF3x@æ~(Ю $( 2**"͓m<o4[SjUZ]A.ŞU7 {Ǥ< !.A -i<8I0<'!D l3Q6@@ fet=5TLg@QŁ.єՠ":rl*E{e̟h_uV u4:{w*Zq&-R'BUcܫ.Rz(Y;T`HH/W.!EkmQ"z,5tYdh@HL,=a]1LJ lLcBEєe!B3e={XQ,W&g4'X@Q󈦉vF/`O.v d]|ws;殼p*zîΏf.0B̔ZIX|L(#K<`" n;Dv*YeVBuq2I <) Vp n-D]) >@ &=3:ke8Uc`eF\0/ j,W]dM%[j2h w3h8ҧU,ԀGwfFFby3EvMl4FNG/usM%_>ن)>lb%VXA8;KK~y "Ū>&Ė|+& 8JOH HL)Xsv.+m˱8ZЮԊ1;} @5!R- EWՌxX(Zgeσr&X!(k@Nq,CJ35ȅZ. 
py:Z'xw؛G7 Ҫ2,j)차fh7'4=>Ug9D5"Nk#yZ/޺ཚ""EV@~mJ`Fn >о 6b(/vs?njy8Hc'EcY 0pRsb0+t &P Neѧê5[zUuri ZmD,AMzW[7TpeŢ&H`_P B`︧5Ol0H5zA\gqQٰ"wPydz\{6!K̏GÜG?3jlvGYM ?!/p[NR;n[hCڭ\^ʉ!yۋ/ "o񠷼͇8?[ӏonοƺ={+~w57N7|#P~&?~its!kߏ۳ q?֫۳wR~L݇sWϝ=gHU@9m) A$4Iz{I ISLE9I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&ԕRhiy;I 7[@@IؓL&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@ ?J0f@7٭$ؓ@&N0  kH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 tI l⸥gpRpv@GJ2:$g(MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&N' )|@>Jʨwܝ?kn6[]@7fK/U\e!},wd~vSzi_o]廫ub8 >|/>_.7^k?jwr/N96|q ~:ݳH(Лa / {|e 2~C~xzoUz<: {dC^Unś7꾧~AzK cZw c2}]KAg9 ENuWY'l·Us- s,]Ƣ}#=o3*[ae~`K\Its|F9ǏGzȇ} pvwr+|~Gso6fk}, g:{{}{+=ۡ>t2?}оm9蹩GnnşȝKA{+f p5[+AKGlAz*`Ӗ;ЕL.hѫ+A _;_+aptuߒm0+Vzˇ;]v3te%SnBWJPS+yCtlfJz;]N(芒5q;JZӕ|N8QԵ+N^>ppt%()ҕOd!`x3t%pi3fPв?v*]]Mڒ`뷣nL[+u GÔN$Gu.KCt}{nvvW~5qMd%c2c19Nm{ӤsQ < ]< ]Dak% }¹ ;ג k݊l J6*NP6[r98mܼZ6GQFPZCt%e5d%pf! Z:zzDW ޿8K_Lkgw~󏇡<;JWO]z'L +x3tdVJR:vNȄd͆JЕʹ̱ӕtut&7DW| ] ֳ h9zu%(e`N 6 ݎ3(hӕLN3lpΥvuf3Jкxt%(*] ]EG.lR;Еͨ+A^]%+V&uظX2GO=&a2h?CD5Iѧ"oiR2堫IqDǚ;;yI!dLq1wȸ=GϺCNѣ*F}~Zs>>Mm?~x_?{F8`m]. &9,07A@6%'b[˶V䶭d#~4)Ȯ*~_\M T_|{6Hܜ=z)# b1>!n&W _ScVoƯz?_Zl=^0|Ԗ)-[\vlg~uj2XF|4LDV5\#TK/{|k8*;|3ȷ}=D ʼru`zқ0m6ƫI P|vU|TؠHˡaw]n+ n uC7,\҇Nh\ 5V c8hʃOuhd2FIG*f]1$dTd.A N"w`.=0/\J(/?_sW: kZW/xVNĖ]]µe@4zU>eST N˃Waz*; ]ZJsV B} 흒Y<٤/ۏ:/A0U@Ζ"ӪOB.8yT&p1 a<{N(O!Zd! 0)z.uǛ=7{As/yB4U7 sy+,nVUH`:MG怖M{pNzc~`:-`.ᰰOOT Uv,Ě+bOG+M/r)eYq t2>#F_[%T29M.Mzi8us?^;!lA]/p;'D-lZ:)|]>2Ϝ0GD ǂ+=wl( Eܗ0ɜ"W:U) ,*ɬ$ZvpP^+R~4$O}Cl{ ,wfvө Wikc>L՘<ꐄfRD&A'˓Z,W=&fA:s̆0)a6Z0Z8B2s4DEϘۨUTXЊpXqUPXD^PlFW0CϦ횩ٯ;((/L"8Ui0_+y5ʹFK%Tq\)$]tAAlʠ*hIn\%$r ɨAGV<ĕg9i3gv,N >YFi2 \a s2"g(cXfQ@K Y`$G3NL#QDV z&iLY;\vQev֙8[YcW_3ku6EǰmLkxZJ (0cDȀp#x]*aU4K,E嗃 tiߋ/~_= jy( \ +B"VgP O:>|p|0\F2ADဨ`'O?qrh})Z !'pQ޽ xaZu) u&&jǮ| of>}ߵEw5ό5[+Q:KKB0Q1 +.c.Ĥ1sJ`"]vZ,ZӐ@z;tlDWVo~Tp-DW9٩7}}?٩H)fOvrJppzB-Oi* ɍRÑ>  sR*(*QD,*'+;-8GId/V:U1ʅq{Nͼ̅Z;)Zs! 
}IJf}VdRwA1Z/u2qs7й]knmxUfAʒŕ>OKٖ`C6DL0]'~B|:.6!z+U q^ k@yӚ8-$۷-;mIQ (4B<Уcx DM"ȲFf ȃ5)&vCWW<@~M퀭Jxs.qZSIj= > 3ۇ:t iz°`qo]c0uZ!SAT)FN:3+s |Psiu=Bn/P'.G1:ĠPkz sUj.IH]CbR Y{!F_!m2'r ^);;F)}n}[#8~φP2h!< 9HJZ(Msܐ*# YYw{h*f`DnI]8d$VH-#uP;g=8'M`)8^LOvډs'*::le4(gd2\"NE2Na x1dB\|MBFp`8!"hCkt&!]I1GL?vcoߗϢ'Fy.He&F9NkT`IygN=w1cqcJr2Y%r31Ep ˍV{bBY,WPE#v&[[O^u$XW ڧXggZ_4Esx*eD.QDI(Bh,M"YC}iǮvw< \jeAO1r~|:/baӬ~3K(f?;kPDְJAtUUjȹp`g={E] $}erZD/@Ҝ  #H gP>Z୑&%e儶D.KRYf \pQJ& hz']*ŕ7ݣ4Y6~oy-WOѠuW7 2#NIYVgτcevdbĹIYU w-*_uzPpVuu׆>d Aa3(uJwsK ޽;m/Z{l=p~LY02'?[{,L!@>!19hҸ$AD Y\ZL`Lס@HB .9C.BdCؐv ,r"YrCI\稺wos;#ホ<+`Us@BWŤ+kbRػT*軘|]LH_7bk@j=.R=Z24!4D Dg w=lWYR]$)`tn*)R2MQeHHI+ m2u@ H(~"@v"dI% 6JFU9ͺGm&F=dC/|nq5u'4m`^IrYds2 !3y͚_2Ne0PDAprij_,9)c|2kKk<6=繁|p=j7;ޯdO}X[λk6)v_hÒ0گry?]":>b\m{㋿;mA[w6{f#!-Aސ t|AKNYZxnƔd,4g;uBVmHҘ$6BS%TzPcz[z X $ iٌ;c\eb#deiM9jU^sc{0kkt3Uሿw5.JأT`1@R!hZ[wQ`! ddp%zbB*1hi-0,dс-Dol1T 12rE*p " d%cfzDXe4+O0_7y-?/Wk$IP8lY犳Ѡ`>@Ήq^bD-RklTDLP }x, V΢bGF V*/wFv!A]) TJLY}^= _hZtJoonvKovOaxQNdp /0>5FRn12"P%:_GN#Ǟ{-DN'b[Tl-XVHFalT `F G9+yM:hsPdrf \!l&YJ53J1 6P,hR !E 1B>Yp."ekBXX@:# ;s7xI7ᝋТM!;\))ֆ J:=D_tʒ9 Hius; 90,5<3#H% ]ͥ/ ٶ<Ț9 =e5iH.\lFxG `'xT;_5 ].?K) {jSzE]&y# Dlڔ.aޫFl,$ SU ףa*†<(Xl#z$ϖ>8e ˌNGcrjҪ͵n5ֵrMϫ7!Dƀ/s/cG$c}қN\z+֍/p3wr0.ǣ;pK6Yܘhe"&AؗOY)|ۓYt_;JP6J+"C>3YGME8[\ r* >}%=& "<@BzB;#|ܡ@F&_G1eb6$RƢ YeK Dc("֠zɹ,< -1DG @Z0؄DB!+eBU:>9%x%zr<gexs7]xcОe-CYVO"KqচO#ׄ6|Өu M5j泗ހpQ ϵ8cL2lruEoMcSh4)@;m2x(v0FQ}߂} Y];U+k f+-YN +;Tc%{6&_OڵIŨjKƀ %Mwi]DT}ؾ(9rj֙Ub @esI{壉 S OFPnӮ,CHbq_6A!fDphЁULӥDgЬ})蠞t]^-!ԝyp ͡i.{FuhPՂʻ&H'"3}a귚& Sq4%ϟI`*cQ͸K ik]dIuɞl$623Sꉐ٢$Hs!|Lt-d|T]W팜I'\s!T$eeLYTmȥ@p$ȓȫS{K7!h4au K o!o:56m#ntTMN'us:WpFWgY"ÜDBcrM&$If^sGb}> eR2M~* \؇Algo!J;+N8< ue@ԚWf͝-$,.hqT#bPncRhk% 1nծxkaݻÄ7|6:e+ۏge57*kǩG3IVk=<\a5k6O]gXFCIݝrz< nxt{ut3yz~Iol/li.^-ui7=G3|PF~ɧ[^]xno[&^Nf﹓p5QEoǼ͍.er&ϵC*|{i|v|Q]4sX6^MMlX-SwGw|ɩxښ^dx/pA~FxSHŁ(&C "bȉ$Mɠ}(!%& x3{>Zx%6K`cisk)l9n3j4lON<9_;|{_9t?FKvϟ~1?]DP*0YX(tlbT{l+^1@1H#eQ!i(E(2?"``0ᄵYĿͨdТ]x*"Z {, V΢bGi  Iu:#g̹cVkzW^]SF|ONzVwUw 6}nj0g+\!PFɑW HEx[I5 
3.F'ܫZp&領yϼc}3הX"Ej~=X.)\fmZO3 qj@wJ֤,P~/6^ հ?L(4& heэ`B?_3ܤsN4I<qZpuWF$J=>5!$TZ:a>_=e\'낺ݴnOjfÆlci?{G0Lkx.! ,0>ŋf+%$eӺmVKMjgaݪ?O}ExHK #PL}ECYvo/v=hdcYFϷէ7&oð`gYE7]%Nz5_66)qez盭E *#nMNwKY:'ny 'Jo>]+<'.]|q=n3W6?3=>vMfYY74eN[ :>FYQ]Gnݝs8yo`!!rԶ7i~Af/ooҞay'g'?ma]{-Bw!bs/z\@ms=/YUu;쾷hmrz:܃\weGAova8Ok?O Td!UPˎNJ O]wt2JyG "]pjtI-2Zue4j"H"]pu i]-S2&+}U*pFWh֕Q0jiJ#գ+h%+Lͺgr"]pjt]-a2J?w+aNtVվg]sV?|鉺yP~pqwF' TSi#BO8?x >bm:8 jOk3VnJ %cOǎO-.orS%cBWʂ ?zrǀ ۽jڀc=EF7k(ZZ ѕr5J "ey};-^\$D>x]Õ"O={Qb*KW2BW2jת.38 ?uuejѕ҆Cd7jBa\l7+p֕҆[WFquE"D"]pJqمZteD(u5A]qtQjj])p:|qkѕ +L0jQR 8ԣ+íghc*]WF)*T+~\+ t6BA\κE8.m;ɯίO 훾}[^^_^/U4CKuK؎(I dy5m,qe6F̾O?&b[[Ei#VOiw4F~e вO1!9z6z?2}RXD,9L\sQ52^jv+򾖷$bHk ݅e҅1cG oLQ1#g@t䌖`~0ŎvkѕX6?S)ìux!Cjp97au5ܡf,mg8jתaE2?ѕ>uueQf]MPWTѕ&_c,]WFquE. qE"\jteTVt]MHw&+֕'p)Ԣ+ (#̺<5 +0U+íueJוQƹu5E]H5 pLJq٥ZteTϭ)J4v ҕz5qJוRYWSK\[I'q'OĜtRvR#Rp}7g? Ǭ%v)ٔ^}#58=Cl_j F{ Q4&nHRt3pբ+ue<|;J[V}:rӡg 8ZqRvRiҬ](2Ċteѕ&EWJh8J&+dffGWXR2Jf]MPW|Jx].@-2Cm>iu,^jR]wGt]Ȭ {&2&3h\MghJוQ6aճ*xfWؕFW *wXxu%de,U, 4+ɘEǡ݅\ѭX7_1!­f2!Vԣ1h 7Zz4J+X\Gy${4en'dx pInJ۾Q#CE6Tc=ephhISFYZͬgt5j4T ѕRSҍqoFWz}lUl{8w# 452An{];j"])0AFW˾]mu 0jB 5JK=JוQݬ te])slݾ']큦Lau"])0:FW]/~(g]MPW>X8<`оte֕XRܺB'ߟq ;jteT͓A ŷ2έ)rB,n/%JߠeL@^y^|L&)Ⱥk)wPQAj Kvϼnɑ4yeRM7Giuef]MPW)PM2y楸i]-?l~Dvt[V=E=zyg |E(CaYWV=:LJ]nZteKוRFu5E]cE"jtePt]%YW (+j]wh%+L4j.ʀFW+Njѕh%g]MPW֕/2ѕF)]WJ\u5A]  5])0c=AeEWF eǮ&+ZX< k%ϸ¬O,{ׯ/rsjP=[ s-Wи]5[ͮ}cn>VkV!gK7'm_1aMs< Xn "'Qs:ӓK(5=S` LI7kѕrSҍR`7+ܲȡxzXWpwjm8P8T~R#tvz@ BER`"_ 7Zteɕ+dYW qE2])mD,]WFnuEIE2`jtejѕq2Vu,b'3ve]O(euIj *p FW+XV+du5A]B5 pEcWB-RŷueۛY-,U pw8.q1hm^UY9OwCY1wn2,oZ1eCC'}149wD;1AQ5^Oz_8̀,;tN,~Wor~~cn< zt.χaJlt~x~I/FK`q~q~~#zޡPEM7ijfIVGYJXsyn`mg2|E{|6f6m_BH XV%EFTJ4kzN^*])X,iiUR|וQΏ[!]іUOG.8R\WгCʁ1GwUQ7N#tEvzҕ|q kѕuef]MPWT 8jteQjѕFp(1̺($p"])0|q\MgPiP[+/ΠXؕgm>JWFKt]eYWԕJBEFWi]m J)=Ь *D|}W+~en+MDh/]WFe$uŞ|UT8ԣ+)Uѹ݆'*ˎYⷓS+Ӝ|ݾ=pz6(1A9WGYd~nw'}ul:F}ɧmiDw7Z|>`?PG[Bvgܞ\qO ~cb\o^۽?6joJ4]M^s5߿V޿b-^ퟕ]XtTE^*_>1߽w8-2G[ w87ρOͷyJeށP!NiAy۟߿XV܇K%WXGnzч&9DCe6ā\CC32)FO:G׷Hc eJoG}&_w_Z Y~˖9u.}mjZIv)pyb CSתkϏ]# 0Ğ\=[";6%7MmG8⺐]?X⦓ I 
}d iYrBޛ~A=z:m7PnA%7$h2:hG |Rܧ~гSN'ۺQoY5^BMI%}ºдj5BuĢKI-7 "߇G04h#K㍰z=A}Ȯsz'9-;䁤K-8rs&5f#"riEO^,8Po!wGў{. ]wAI 3ˢF-iihl) iy:VTE-"i[KӨm jж]~s#zHQ[;n{nu. 0Wg\/ *a =1 kst:PVPB ٻVLU:I}Rc+ZFb>V-tC*D4`Ԭ4 3*0 e>id<$X( ꛐI:Ci*dL'B,l*!+u ^X szeY r3Jր1uoPP,Lh=4F[(2#χPzQ<t/XY;C7a?%ӌ/%RC%:&|B(sA ԙy/t^̈KQED$' GcE( / U<ۯLЕr.v|JO_j*赫3[ݗ>Hh>Dfxs:p~~t*&{H\C RbtLECNZ,J茸pΠ%TB;뒄 Xut kR([|Pm!ڃjF,F,-;#hﳱyBP΀He#kvqAkl:% EBiv%&H3AjDPqkU;P2,,{njpM"β@ A16%VmZ4+i=liVԀʬ$޲(m@VKKުhQEx iIƁl QH6 x%e SLЍ![[&Wi iżm9`\ T5 Bztqf{trlѓХE4Iv=Ilf1hmTkM!J՗yh4vM Ƥ=YBϯoyPbpPaRDlsE6G=Tʍڪ+)o^tR"˕,.TP=`ePˌ`*1#K[Ƞ r =`-f=*-+P!>]/D4cA$\s{ 9,RAy/aèIP(EeQAR܌EEH,zBzH` t@ QYj0Q3j hNmS; fnfR;5ATAlR>Cɗ U{&dd, 6#TK.dP?n휼߳NKîA*DEKnj-*;k*>Y[Tڠ@~ǃNZS0h¦A3 _ 2)/$čtro*AkO^ %׆.tJ*ukE!~1jEro "&b9\c4ԤU|I负PbC't\- 5uhD;S`]pB;&^yދz+7,f"L5vϵC^;[zT<:zG^>Kd@Ae_d1o2Pן9c؛+%&?גbn(l{GhG,o'&=:e8cZ|5]}| ]z7]gf:_x!.</A XҕVh&g/ [7Khm5#ͦhj2am $}'#Ey)<3wU@![F`$85cqpEO (; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@Br~LN -fש86@ N<̛ ⵱r5Ѹk1 d5/> 1x1G ௗ$ g]R bZ/^+oZy-Ѿ:K76Vc%Uxv۪P!=ܮeڙ@;fJ-֧8/D!w91g]&[n/:%t%,9wj3Z.zk1QXtr1kW=z[Ul6/(~{=ҙD ن,؋NLCywH1Q#=>`FnG.Iwء)ōʾGBH6 Ylf0,fa6 Ylf0,fa6 Ylf0,fa6 Ylf0,fa6 Ylf0,fa6 Ylf0O,liecLfa9c\3cɣ7 l>c=N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b':n}0do]-bX&?whzK$pۣx"gBO]W/nrX? 
.Қ3+I$@~DaL(Z>q@q@?I*v@*F,;(v@Pb;(v@Pb;(v@Pb;(v@Pb;(v@Pb;(v@P|k_Ÿ(ܸD(c)>*[A4Gpl ;"A;]虮2terQ`xr`Jtu?= ] ܃ C^JI4"">ntc+ 2;]JmN֏I]3QW7zi LW'HWZ:) t+XxtE(c:A2n+kG;bPtuter0"A61H(5S+hh}1H(=]"]yM`L5j,tEhodJʐv;l8wD >Yy͋C / 1G ௗv" 8& Os*>k3ʻBŕ_][]L˲ 5=Z[_BnͿ8/4&˷smc܍Ab xyw^_?k-Z3Z.V]:t ۋ.ysuD33ѾsqA+o͈tL9@phvZ]7 OQ7D#+Ƴ V:UBxCW+>FlS'{uy<2ueAWC/ =~Dt:Xq,tRc+BiN P +;R7o))]ntu:tYc* p+MM1HhYC- ҕ[?"CWGOW2j+k7kXk5"^ȣ/ 浫S+SaDt^g"ow4mk_zXzopynY:b[5|?{L)څ nC<1aG @Yjn]٘{}iÂ1^쵿gnQ_>ZΤ躘j)ckQGxd1Q}3tL|פ;SDͱcT-1U4x*OohW4anIV4.Czv~[DoϽ9i#IC>g<ʛpٻJ@4yd_yd͆+__?;L]_lC{s)C \BU-׿~i6g>2\pߛ׿pǀtvm~g} ܭ |m/|sg< s|{i3Ntfr9&&6|m4[լny|P?Xp"w-zءMtRar Q&d?cu"T])}p5,}m*吷B}F)dn:KaOIt Yg|!6-\'ߴd{0xϑ8y5;_?%Něq[@8z"Uo˛}>aƬ۲~a_c~Wc1|~v7 ?՛E^.^ Xl|qӓ>xKK\ot|Snٍ^|Uˋݷ_tRm7s 7jxi3P1)X~;տ>T5ɚ'kCRR Hyԙ5ϭ :Gg3`3%%NfJle":SL4N]Քlkj""R}T{Ku}nOasގ+nR vij][HF94S dk/gIHXh5ƛ6I*C(%n"CYi[S$ck."dV^}p>Qշ /"(wt'zR|'DN^\K 7]yoQzw~%ws7޾]?-EyKy|~7PLw_B7Fb{ۺW.'! mbܢ70Q#Kn}H~Ēȴmg93pf.-B*ssas~ lw 5tw<-Jl7L߯? NwQҕꪚ[ogj85J׀QšqÚ(QlaCT~0LCƎHT¢p:\(6 bã s*E'BLW΂`p#S )B+\dEr/`l2LFΞ]$&0?c8Aff1sLBGs1G`ڥ=3>ѵ"Bx7=̛Y%_cჟdF8=x5qBLBlDNh;d>4؟OtR݅,Wgofш8 ҹFHE#I"^-!Noy/*66o30jX⫃+/EA<5K>hw['h?s,]¢M\iͫ}:g{ Ifk2Fk*Z"s5ňX!=fΚfT]lubn4rō.Mn n_v6Febi75j$Ó?a'E/ل kavm1] |7z^bӢW4&ڳk¿uHW&r'ib/ ufwץwm2ŶCPZ_Z7]pNwtzfb< GZ6ԲYq[=_5վ=_kfww{y'|7Uwt2ޅ%֭O\ίVRftX..7v{ڞ&=m71 vn`DhScu, ђǜ=MM0GZ~JY~8{(ćږ3A}?#ZxUٱ`B"W9yigG!)z歱UB 1JUȒLK%LJ 8%f"k]9wo\4 vQ:qub-d,m&̉vQٻ !Gl[76}o}-;;5H 9-m,dL,D2 ;`FxPxyƽb(828p2KiLubBt 'U'"MLIY,T2dS An׊@(*+j9R5f @9+Sz>'$ )mnvr/59>s6fGV,z:D=<0Uh`Yӧ6 \P;:ʱ a -p(`.> նFn!>Y /PEWnwA7Vqg٨$p!2<(!4824\;a IE2&m?]B_m C#,L7vfaĻ0ux g?*CeY&]5!Fwy:&$;x߿ jj" 5oϼQօ4Aɠx5gTג1'3-p D ^/4xb;ʨH?L_ *NF(zOh<|cI_NOKʖߒr}KR?^aC{n]S_q)VpHϘPvp"ts6l4)JA_ &z`Ξ ^x=mj2ՐS%t0KB1|g:?#^ϋR-.%.D,M܊Hcc\TMY Yƣevpwa{NFuA~`6 hG7l0mB8Ώf>糈gsY)m^nM[exu>^w4b%MYîn1oG]oSx~;QMW-.ľItR>Wuy-3'IXÅH<ٺrf6l>0ILuʖ+_hΨM?h?deU1\왓yӡQ{"Fnrћ U"V;9z%-'g`L,<3B36#ń SF8Gd>J[x*G;x2u1[pw0j ~YGvAj) (a97F/K*Ffˇ^j+E~bW聐ً^q@A;hV 䉂M]^|s+ԫ0NgbW)m/\~|3Y{X&qs˗ eܝW hI:;=oMlzNzy>o8|vzl5d|}?Y+2]BqBMÙ N '"$uP!Wܽ߬; :踈ɲR> 
t.$Kcb%rO|@:qV|(㋙CO۬˴ZT1cg!ŻR5gxí.FtJɜYr3p91uȜn0{#+cs]}i}[7yƿ2xղŶw<;.ZpMWi:;- ӸQ~[IV49(` DC0`" ]1ު9ԪqeոǶjܣY5Jx>5KłrpȘI>Z'2ڙHjͅe*t-s1,qL9w1!$wBJ;_|UyA!A?<lu&vB~o56EZC$QAUjP`.JC:옏~h BcL =:"O v1I1rB+Nf5Ӂ5)&VAʚ#iIMM}l0>u5v'bv| ˪CntHq[7PB0I!pB480vzS9G:NwqQC%\\'ª#wIE'dA78S()B|\d˜8^ʨ9lvc𼭺$ۿFW`!<+WZ d4Ƥ2Hr=[}E8 E{P1&pC}D,{ڎ< $$VH9,SʂZ=rpZGq{`XɁQeTC뿊_]tM>}7;'mhymOmmܾgE8'i RU$BN퍶I0=$%X>3lN)4"Vz0舎Hɑ܁d@Ҩ sfUjgϨ<]h,xKPaw ;r~1|w Ggdi".j\xȔ^20*(&CP [ԣa(2t_3rևSH󯊱/3T#ڎ54/9E^f,PLk1 AwS pްoLhy7T=sL`N<Ğ2^/M;ֈZ:Zϟn![~I?BuyoGE MrQ{^5c$3PcBI!P]vC'd|t<[W{ aV|y>;#+iޣ+{J-/ceVLix.VK>|Yl՚nή_UW]U+ᗫs@ZH]2}zW_*zpu(h:;(1O6d(O?~y??ݿ~w߿T~ˏ߳3%6 H0I`[K럟4t4XZyVvy ̺[^mJ8-mk+@Z|Ia|jEVw{H[φ&{ Q^|f10;+Z~*oQ_F[!6뒱s#]GOvJ'WïPhvXciDzFAb %*@ ״GCl=o#3 "IAKGP9M+u:99Ӿu]?mNϫks ot9V fG!$bBc!vScE"KƔ@cM$Mt%jI4oEb/'͝yJd4E1#; ݲ!vEabHBYt$o+EdI\Fg"Į-wEK#+|akqj8ByKo:1Pv4A8# J<Gbf%@EXIE9$˙!"ulL@)F PrR} 2ZiMCY]TEww_P 5Ԕk]1eN͇1f*VGL ȣO4]ub~b F?͛jA0ȺG?{6MO&k=܃f_J~?99a2䯕P=V 5uH>嶇[CZ,ZyҼgS6Se :!iJc&ڈMQvSyی[$@,W$db.EZStUYkmns3Y^mҵl'FWc^A,_Cf}~_|V#,vs(:zӏf%m}y73AP fVƤx;@yd5 rh d`Ԑa#!P7{P7؋AJa{$^ , .&Zyjc%Znglc<+X\3䆱3ҕqW{K̢$Ip/gSAQ+ ){98b S?;t)g㩈,s=g3P2ZY$ e$J)PNZ"!3r%{Q3*+~_o4ya{-OZ@Owݑa;7:yܪOt"9 wΈs"z9Bh"E\HFLuT٨"CmTw=th,ԟYvNPə}6tB;oe4:Z-e()Qpu2IbQT@l =,1`;YP` HY g^H6 3r* C~ 䅣bQ[@6α3AE:P P"xH9CȀf {mQߥ>aĢy_J(g 0hkIs td4ġq7&{47mFT}u1HIІ . 
ev̿*gqXO~<(:磂hXH) 6 >D疮Js"s1`sܵuJ댮(֙ J:6D_tʒm$:م?0#b*ዚza.9j.}1^ȶxacY&6Ab4;A;^'u}5m{pvt9w>I{1O#Wݎ)}~LӨ_cJƔaL馏^溝䭫CE; B6 tc M^5dd[v&ž\ߒ'K auuBDIZSL줉L잹-Et&Q0#E^{ž'A^Z3VfU(+ٱ9{҂8s۔ 9,F%/IвH!ÄOG@J"YYq1kdq!#g By%G:qB9pDbPB7|$c(%r(6K"f|K*7!a2 ˌNGcrk͵5ֵr&  %~D DHE2d-&dբ(ZJOen%D1%MH2H R69X {]tE|'zr|6^uN򽽕'h6Jǯ<y(/no$z_*=/R +&QW\'E]UjQ]]1 ^ҨYa푺Th4B썺ԪޣJ1zʠb{*.;N]Uj=]]1^%+'$>+&썺u]]U* TW^X+&kuUɵn_UcUqPW/Q]o̕EGa 82$F͎G[:F 4Nv brdxon݁WKXj!V޴g͵_)6u]lv ϔ_6y@3w" QJ7uj|[|.[9__ֽ̈́]O֦M^9og[xATHWVz_~FgӋ٥ ?H$īgܚ6=UX|95kJUي7yYo@H"R[统n`t0HN r%A7~si厵YJ>v{4煂A96q6qu]C*Cc@k&ӊ"R늭pEjms^g\=\w#!U`w98AWQj<МWJ؊"+qXS節H lpErm&ԩTJԆH++Rd"Mg\WZ)#\`m,\\ ՂTʐqC\2J8FBZl֙qE*S̸ +l`++\ZH>">+PT H.=Ժ8q2+ 8W(0=2(V&j'm͸2l/ٶ09-O.fiuPWXPA<jW+jrbi›1ɯd+{ ͭENEP[r2-N*඲l0.[),7fZ7,j *Q ~ 0yMrß)$~ ҎDuo(f r\r%&'A\rp䳁+Rk@C*3|G %بC'=9JZ3R%fGg\=~{mqLU H-$+RiUUqtPߨ'= qjmHWJ1 >x}FWV%+R W}e+4 WqE*d\WJ-9I>Hcj'+T "GW}ĕ 0 VO*WqE*̸!>KO>︊iU"N>KC",pi$WNZR?Q.}?l7Nowp p8%Tv{w4!Q.Z I;1M*Θ!xFB^6"Ժ22Uh0i}x\E 'GWqj@+TD W5RhW$8h6BRZ."^ڌJy0^3 zEQStRe\WZnBPjv~> 'Pң&7z1}?gJJ?z7ꥉ.N~trb^WĎLnY>n)}M+O t\jX}QG5>\bpR/V<vOlK]-2<p2Αd&VbMPA֝h6pƋYQ-O*d~]>i/J{}uZ9=lpǥ c' ԭC/$-:~dG<Wm^t~߾|mO޵Ws,ߴn9^ N0l$hj%Zt(M=ZCN0ZH0\=Q+qjM0R!㪇2No+` #M. 
JRT.㪇!H+ +W$еqE*}NW7.婢[{R/=NuΛ ^/IZzժC=Q(+C"d0ҷ%P؁PJ :ԢFٴ{=rJ<`eulP:&z8 J)W$ױYNjCPʫfz-FBA8|(FGWraW!DYR%6Ew\-͸z Z#\`8tH^(:H%Jii2C`++R{%*ϸ!0Z2 W$ױHmq*W=Tڧ馏+ He"Υ+RstG\"p ƱulpEj}HWR#,RBX]\\pEjLW2ʁ^$Wk-xV(/N>UCz0Q͡wF՞KIjQejrӫ.m/ )-AYԦ%F)+2$׳@DTyGLcҌp3rcUcvũl<ϸ+r$p#XU\wqj?!Q*H W2W2걦h#;W$[."!+T "d\W8EW(XkW$װ2u\3z+>=Hp6B*HW22+p:u޶"A8BqU*u\WqNW$i6"Ap2TjqC\Y/4D \Ԧ"!R$a^HAc8ָsY*Ev8uReWS(^fDq)%E+A&j 1ccGZKaS DFr$8x6#9%RMF"8^`h*N.x0L8>TTcM/CpA3•Bf++RtBf\WJ[FBH6"z:He0W= g0Hr5\Z|tE*˸!:0 H^w(m+R Tڜ#8 Q+\ZTv\^pe@pE`+k+R]6f\WLz`%Hb"&2d\W0*:&QQLq.ڥ[ԫkWy=Qup,ed+B62ǺQ1O@b4AFI\&_Lj>!MG4 oRK;RKc$l׽(8Q00Z4Er Uvް=Ieȋ@CNKIlp(gfZJRٚ{ȸ+hz !5Hdzo?_Tf~=,3/i3Lgk_]s=غ{/Jl0UfzZ^-ɬ'ɧE[He_=o~qUҗ5XA}c7ՃɊ^:6O҇~͇_Fc n.x޼`(UysWcYejjQx]|vI_+br qtgfG^OhZ?IFtZp{>qՓ\wOOpe[Nm=jv~>Xvke8ϛOuuM3~/!gg&-ѻ!vEqV\_N1/gr27|o0`:|epL)|UN e+@UDN9DYUŦ/ya羨kRϨ^L9oX͎_޷GzMG[eV7pyEvֽ{dB`s6,pXGFġ,EܽGf4>2&^mOQN_L鯋a2~-L&(a5!&>TxR3IЂۥ3!l'ҚMvpFt~/ؤfRX{k<ȉ-9!Ϯ/kgr^ߙ͂;~GI6UL3Oe)[oX~Y3Kڪez`~H=ڭB,X(;SN9ص<ʑ>^'&򬼨 a,OӿbaXI/x8o]i=͗fT`E#Fo{JZoWl+ z¾5O+zkӘ(wc+鲕G|˿~%mZiܺ&"jFRu4J!%݃|xO밣Vf "kivB^9t_q*TP*Nպ&©|Neޭ^9o{xK*Əeq .We~?vVTy6Zf ^zkdjVksVi5"h/63,U Wt%Hܗ<kh6Ѿt3x8@oudqB5.hf^l-:*k{Zha+ ^_[.UO@Ң nKP$E mzM}C:R{p(緕H;(8Z(roEUbNm\Y'FejMB :Ox$(:!X.F[G^%S:D-𮵅 bUɞ>7}x.vI(QCWh|t=0Q[$ AkDr~9 Fb6;5p<:WV]{0aF~x jyno߽ǫ]+=a?ᷯi{,7ݿѪ߼.%f>)u(]Ҏ'ZJsVWcW[/8?Vj˦=Qi>t$L*Uk7-cڎBYiP0]멷L6 #.C-ڸ Z؜k]L>l !IAqtPIsalDw—VvVONa;t٩1A|.ǃv..9,2+T樫6ي1RV +!}(1ªւc(r㲜 jj?b08- M 䶥 ~ ﭱ;k~U9@_1na:u0cb6@_/_}{𶍻IK]ߴҪݤpTF(a,ajŻaEw_yz]b#{䞃ݶ#`,e%H0+i;)OO;+::k*CXfX0lÊؽzP:ݟ0aEYٻ\W8J6b߱Gٴ;:&fj1k萡Y ŵ+;N?c3),h&ʲ>[D70hF#%~oIǬ=ێ8Lrx[ Nc`egА()q.lW\U/"M7IEv\x$ I*"rRz"Hq?(!!2 (=/*i3FˋȺ.y PnjlVțtcu_ma۬,kBxTŴKAvQ&q1^+ŽZJ=kƋ_QM':vo0O fu^Oؒs:tW=7I[dUMGa#llykWDG{ JLJJg,Ly޾9/F[Wi /۸x3!vbs4@/e2eȝVV`h",}R7Ϗ6ϱ{1m\,7臛l1iȶ𿞧׿BBr@kdCNz~x$i,eB;1Ed'(c?^ThqU]], ;88M (%艗$C>1ڈc4u&=?"S~ͮ :vG+9n;ad4/CFy0rxsؔ] ~Nz1$AcǛ E* q*/qF@yӒhBLa6 ts߃= ?q>Ttk5:lӋ,ƕӼ(k+eG!2JKYVdZZQo.Ǯ%$ <|92~xkۇ;wǒӲ0 *y%**&*H)4WLi!89߻ieƒ˴;ؽKa<[SbY/C :Z5>J7Ǐ-:ՃAy"#F(#tx' ͱUj\纎ǝ𢬠W{VZ5JՋ_<`W%"nhGH< /5~pܾxYVҬ`.&Կٽ/Z<ֳJh|n_D݌vs{jAK?Yr>_#7uDHzJB8 
vJcX2#`Z 2Tw [#>/$/$n??ص_wM#\/?5r>mF-o_~BRTRR "ߝ9 T*v Q?q {sQhc#0Ve\#R{8y=?<Fг~wy`De#OΖ pdG8צ@/ +ݗu{2eYv}"Wə=$1MvsӍ> j p"'Q(Q`Yѹx$@ŵF~Xld,:")2'+=ϙjH>ǬK+9}kz맣é8A׻b AP }q@o@a9O[&C ddhe ^2}VɄ` 8\aQFVEJ!!4ObHJ*}\JHkGg%4Qiڶ&Zi)4k~ J'b} +. 0T{F. qй)KM(\ŘNf噩}ẢHwι4y9w<TYN_[KhjWlT;&\ErG8}!o}fU7?D ࠏ%Zpzb"֡2.ci4JlR'\8x߅k"E wSTk_mUrP^[T"vmuJiYhTaCb'`-Tj@v1J K $':˾zfZ ۬7xAȒ rZR^Z+oVna1` ķeĭ]HGĩޣTd%B߾}!70]xՆ}￝ n@ {!q(u7LT]u,E8W%U\Ak#= `{kfǩ_ɈL)ePJYT<#$xz4FTE&#mmrX\EIxʹfYRVB,"jx^BN9ya\V8nK(Mf: TkU`"QVb@PY_E9ùoآr)oɸ3|`dS OdM$)jWD2SdPVd3& t-=zlg.t)8Ek IBƨ})zd;-Lʕ&tr_~~^h8nM},g7CbPn*RHXqJ{FъITLP{CCƼòG;]2 c]3{>ܖN /n/75\~hO ׫\T晐(ӥg#: UyK}fB[?{ T4K%MS(t PNqWtE)X1Y[NDj üϺLx-TxG_i'75wTnҪT/=BSZ{l2:Vfe ě4,ge VgB9dnjEҰkkya.+wQ4tEKVX2MRBQ @^]25M#FHe0+55x^ U CE!^ '=C̗Z^ I0 xdav< Sr LJp$u/E h If;3&ÒǩYRĄlwRyY0׎4}nV@1urMszuCYcK#õ KBm$\^F1b]3Z>Ŏ5~"F Epo.ٱF8}_?/݅Ђq]J k[_>y UdXbtlP/v+ PqCv氬߶\a#_<,JcK9Dq)#Lgb6s$M?Uֆ3y>½6S>cKO-ʼey.N5~ x{N~k{?{pN7%vDeu[ ,+%'YЫnIw_2pjO-`5`Ck@)w0 8tq5/Bf6D(~Cr/Y|dAݕr9_)|Z2=FrPk+1F UYf`-  Iyv^LhkY]2]6V8JF 6ʩhz%vV=N(~$G̨\`Da(#' LQ/jvHaWQ.PĒ0[tCg Efdʆexr qIHpVZϿN1G`B]ğ@,;$5Ӊw|lkU^C靗rԩ3Ŀ+>XBIؒnClNGфZrjk*RJF?{W8P`h}44XgvVd]7æ$RJ23)P.*f##3/\taU< dDmy@D$rG-jD_g"5kdxtƉ{x^X~u[P|ѰHd3Mz4VAgZ?qjD Q,ZG7}AawēO@<݌@QCI D۠{ Ά<\IuyQMfyϽA4".LL_ [kv+Zf @')ƽHOOPѷK &ĔYˣ? Ut.=Ķ f-oP Uc+jC!6hIԬq/Ȧo4@ҩ+Y&)ߝZ 7U@ ݧxnp)2W23WcP;SIǎm`{q1M H3B,"x*>~ãx9J;z<~ 4Z񓷔d 2T$CP?/_~e2~. 
DL} BqiCP捙:x C-_4`C&?P'rQ% |46"@E_d(p<_B8R?a{\>#FoyiP[LB" pe# xg f!?K&q|3 2/`Zvf.: |2ڛ旹MQ{DrA93@ލZeҺXTˀ P^'o3#HIg28D{8(uvd͓YLBBe$~yNMdri ؂Pv p2XI$ =Ocj$H`qfK^6~2/EDr=1"\P8gy3` R_klkqx2LNyq˫/#JcRQdQ)SnvԺ 4LU/ en@tD,[G2)l݋"Fɢ-.BPǏ*oFlpF]p'8f<ڋn 7bu俜kb@;BL?-/&bؼ͇0C2cX^`cdO UbL(BދxFT ,,NkMhng,3B lxx[ׇI,c#ⰨMʹxo96%`@3Zms ^<\lM[GXN7_k 8}&+NoF-&&3",ޙvQJ ox4V̘r*<9ɻ SM_kL\r~)2 3 vs,ͣlNT}A$cIsc|&EAьp'aʻZ$PW~ѱ Lq/!ϯF@松F"r`e,%zJD''6' Jȩ9&r Ay ԥ$,&1%bg,d#7Aī + \\nj8 j9En wd{;)}":ZYA >׳EeoU7ienU>5nᣐ"gI1iiZп5k WɆґlH,:55k VQ8xaF8bYs=J{45fG9\K,5^a*Q#gl`fVG~Ӛ$αj+AU:&bs]p{-l I!S:SlCLyC?TFA<Pe6A PǒWơ(VIĠ>HM,_o"U7b&PfBdxI Pj,\!0j<Y=pf&} hH%`;}g"e=õZpׇ [bg\4VyiZ}r xwA2iiY@PZjn*IIn>s ]lA=Ԭ1P5o43ݕ*"@:hNqrVKpz5TU<8B-v4 ihԁhW0[ ,-Ԭ1, uփ̲Zmٍe`=hA%Hp7eU̓R;+@ dazS$EL]2=|c/G2 3ye+%HId3GAXY@@&()$lai ŴGDt:zΕ@#\C/P1-o WhɧfC xDMG1&2Oꆠs)RRƌw>:9oSV-q* X o֑ {-}2{hL頛!4Ka5bH?M̏](S0Wܔ6Q([)uF2Hؖ}eOe! B+[ r1,ePr;"Om#e9܋[I*\\rPޅ;H|{WI7Sl#kf<&!a豗ըaP >]!weQjw${L'3 {;3 _ZV͠:F }|ٷNp \[/qdXb]jmesb&KBraQ\nr~(S-VE>ʿQFqJ,#j;M/",s#%e. LyK? H@1q͚<0("m>s%%3\krep NQ#l $z ،)]V<幌AgNNՎA{#OH)d3e0"(_&1UW T&:brV.5ȋCy6M:HGD9b]wtB"wͽ@$ tL H 0 @]nMY[IkQ:[O.s] ?C"m>(_Niͽ{찦3m.#$m$@?wLpPAL @6Zwe3 wQX?D!<Q8}lI^~SO"C}`#/vd1RϾ}e VeEoifؘ)"f8Nާ4]~DM&68\^PZT-R*$"q=sHsLY:q 1y`s6|_~{7#U>I@Q|i87D!hWi0]Q>:QiT>ɂc4tlDfoEz^B[jĞ5:\*^o8f X2ĢMgcä́n.W"Ѩ{;7N~{?$jLӓhz)P{p"Gi6 ;PrG sF gjihGLZmnS;2{5~Y؞t '~lŋyv.}s&h@4wbEw7 `@RWN'RҎ̀IdTBC ь! p^uJ\)"ʋswJS4`A)DձdI.LuT0@X 9xL @7! >@&y%6zBm޳nR+?rSL"R\0xYKQ(Q(dۈ_Pk@8ĩE—(_$~%SYI' ŎlW_"aT]t@"V^U+'RzV2xE7Z,'$>SL ?9OEs%!WU_<ԟEB_cw =DIGn$$tgϠP"|=cO8P!Q=H]dU xdv@ "?c]a^$K@ǜ)]p[YI۹9CwE@b7!SCY\8V_iYCĀb@I;JƇ`.B@cҝ(ln`7#Sߟ",0 @=Olg{ (թ-pfb9+OnI8{J(Ys5CvoED}Ťhv0BcqQF\rRuG''{L e|q|1Q n,AZ3V!rXo"~3`RDI;_`3 @/[<{6 $mz' +v )}>;ZYQ7zf! ЗbXz1O_BYh d:N\/-5BҵH$豚5p Ya25VcT;q9BZ3k.?{׶6_573[Sq>j/f3U{;[bI%rタ$ԁ(D%ՕD?χW FV:oB$jz s*YQ=nVX/ƠWp>7 W%׽=C^I~dqo06Ӥ#H_Ӿ+b. 
X+f{BQIA%I5c0ML*a@!(tcm&U6_fQx1zC'j.KPH_LQfo^贗?$=[Ǟ!)nXp C̋U 2!>M'OP̳LxSKзAh̠r*3"opѴ="U쑨%4.y~5 {/q1+@=SM ,pp!T+"l0ži+1%N8/X:p,D#)(!l6W̯qhZD U!{{]Hܹ ;w!q箘S p=2Bf֯IixY_jÒ3 &DeKKMwqi|1%4:3 +Sڈ7g &D Y̞m0!PjSY.S%TALYAݝԮd4FP#,!Q jipW 摐~Ys/YA߯vOIi^ARe5@$)1꥘lg .xeb*jWލ"j8[7K%-ܓ_T%,ږӰu)vNEPf[&/OR$qꔉ 3 H휹g K*iZ:a-hCKE!砶."H};N`mbƶs!!i^!YGqH Ξlu+(H?SժT^T@x]!2{5WB `SF}T<5`4.-oμn˾|y8vn_{cP]KC+DN )& LSeţH.pnI虌B&v-Jq;4rϠ"TRc V;H(VeX*W̽ᵈ] ]ѡRjBI|e!&5Bm@ 0!W|V-:޵[q f2µ_ 羭_vcu^ˍ-ff%d?ȭIX+nEa~:=hBJҌ[/jJ#g0zZ/6<Ĝ[n8MS~XAp+n%nb+sGǯUu˹ f %R)"8NkǚdbPRJ-Y Jbu۫fJ0] (Z4f|&ZU( rtz N%)Հr š ƥj=>N @ƐܙK}zvֶ2PhCC׮E!)ȔgrTb+5;򌧆ϋW3vd˔aSøMQ;:(b%nP#D)&%ae 0a +Y1n FA&7BTuy%Mz Z+INe2c|;t;yLfp A^tUE+pP0ea]vSЫgQ܎N[dٽ,$#I<+55zڃPH Èʫh%4^լRD&VzSb=8؇,/qQa1j1Q(lo|#@6@ZLSSnbm_$anfc;,BvUNPjS|mP L/~kUp 2[ 2"-󳽺<8ѐb׳%^%5|̘y\O+ CWBgt֚J4`[c{znLLKK?i(<0T[wZ'4(,ZV/Zxo18wFx QP*U2׵>^Ee֪y njm uM3sEn8 *$JC촷7 7@CIdN #fUƒ%< ^k ǐ΅gn{=™ J[VY/5F.C Y,ɑKht7[a$xlPsMB4 ݔF-;Li贱6P+0<<>dlOqZa[5# ٸ%^ "XjȌf e&e@Rp*~56|o^P_LOG9dUZFKht-jd\#%tQq"ZИwQ=fʲN\끉9{Ӱ ^'87H$6pxKLe@JSH{N@F49}+8WBr;Q) w9 Ң?uck#*%euNg$[ƀ=Q #G:oFotM*aGElȌP99oAc7yQB3,y*}Q`FP={Z ζ5Yit+or5l}]xbUqE Kh\cPrxBcĘŃ/鈾 ŷ(~B' 7Cxv4?Mk-zfMfK_R 6L[,)I;Ƥ=FDd5 TRY.MUi{1zb-|}rNh *oQ Σ3@>-ɭg࿼& up!Mj 8k4tBefқmkoϟ&Hϫz^{a_ĺɇY(zCl!0zMUbyrA6CzDlMbdrZSBr*ڜ:8K /z!!yVqf͹`y+, MƆ.Hֳ%:^?s f[ݠ6gΩ$LO?#{np@'b 7̌sKi_~JWD'NP(lb"}8`o6)hsg}*h}z6M/~N_6xD/x6O:;?Mz?{zi6OoG m&K,{r?}Ţ;QPu oߗo.dsWN|K|,?e^w_R,RŝV.qF)Jz_{O #RJD: .$%ZB SN)X0T@l*-v~7cIx^.KI*\lfԌ9ќ!AtM` ڛ*9\i.3efpbP x B " $-ۢ~:pIϐA%aSPp@=Gj@c]S{lX94J;xk{g)k")ÏX S(g֟G/~"φXi6C=hS(JuԊ"z/W8V纁t'.gy2t(z&8Eيӵ6H8c*˷ݘꌯ9dP1O.<\\ 2~>W&Jup*%[Q+SCF'RY%z"3fw e&;!\ ɖ\N*÷~1hA2nC͹GUzDӬֹȂ0:FF]RMSPTƕj~qztxa=cv}٨1K5.vˍ;Ш欺3n0;JsRmDLVU?]BeԔa`Ox2iJht-2z>Q7iku'Oc>B3~䳕ewQd~Q_RQeQa+R^&X* R  P:!4iπxkE]) 8z&\(ES54vJ`.7$2ERiH@ V PR1AɊ4nMwat.^W%a:ۮגn%-F4w(锳#H޲EqڹojCոZ<:>,KYmjO.$3x S 3[4qUpW΢\;>qһ+pj]g{u;s+O]e K@ygİ@h/3`wq$ w,{NC;e|_q!cI㾥 }FVFJt#mMaL 9fRge^G ka8 BE(yyH&Ӯ?v7tI (w8hJt6Ӡh g<Cn IH5V3mRyI;+GrzPm[T]ܹTPuzkQcAv$4:ԤKҤ}~ݨrE+F<."yjU$'T]e|O;u]v|#ϋ1+4ygQEʀKXjQ33;N,ZDt_ 
d8U)W;JQXyE3IN]Э@ *:-\kc]@9V=oCgS L_n,pfSqo#4,; ]* m8WPd FƂ[5v>Ã.γ_&3xf_9tAljZl*1u54v1{Dj^=㤯)' gG6sprTkh(@/35=_bKzX.|plB]CfbX \݋wA-I_ obװ6`y/6ףxJ'5`?]kJa̗ܥ`@8:|+iU9~=Z3GeM% `: lr#Yˠ&GT"LPg ~ o44B? O?}L&.sE⾪X^> } Lokf*6@WEнwӣqn^{0 WS`G3NG83ȽE,|r >0L;w =IJ{E'jhq6P6ҙXW|rqQ0TN7XFm>jʌŷ+?!RF:|Ⰷ_e:\ir $?Wy)867)@W@T~4k)uB?N&hٳݞ9ч,54^jTt})cQ˭Æ'DNO^xN`ΜH bbK\@Ak Aˆ7,GE \L30hրF$)-7F;yvuB%\/}&Vr/J|s~~VMM1qL0eTB%,./_Qҫ ؜~-l1Vdž=[c-:9!\!%dˇʜ+Bam #{!!2$VV7EKOpB17)b*GrH GLTma-T[:7"1ǑPH^dqT u>zK{4P :6Nf{,ڈRm e@K>xZuX-Ǘٚ=9ϓn$%ث WHN9mSAߦ  g㸿8`ϓ0ihR]s0AO" *1̆)RF$ *]_{{w2$F{w Z )8u?r Fg m+fVwl$8G1~?ܵ,}WrO@ߖHw"ޒK^ /6>wKipJR<‚⑀J`<8֧^yoh2+ǭzniL`lRܲ$N H\[pߔf? K[*)6|ƌZC)i 3 ĉ*xT;*D]riHöȬ0BN|X2n9hmrS̷ҚGQ~QзVJo}(G4[XZ!eD<$`Y'd@54vV!HN'ZQBkjմ[,7jؒ{iJzPrE ~ d.Pk G"^`ZiR33z2/q'oڨ~FAIAkd KF{,]mɋvFgrI.1wq&`ç"/YY]UwluyC<V􄱋\y*' D'U+eǏ0oVR `&4'3F<p,a@i-y.뭁?5Qh-p; N 3*h4Jq  6pCn[P.+=KH8Otd L TLWpB/Q!ƈ;a^𕙱ꄕQ)$X*igf((ԩQVp1m?ng|A![A΍CMFB,oj0r\KG?Ki~N9M/ .+UXA =~:o"3sJ%1d%5yH͹iک{0d^P G;F] 'dr^V↓ÎiF8#5zտy@qJtDUKWz*'3<0h,t5ƞٻ5_)!#aGZᣝk;aɟ+qhr=CP1Rutm輣 2QGj& qo5̈pN?Rcg=S\;Im./%> 4tT)gtSX\'1n%_|T-(|9]5 Y1 }>!ljHx7~2l*1Yeq` unq=q<̠=Ɩa6ړP9 <[>ǡuO{Q=.|'Dۼs$Y=b]cn!tmpgb;7u"}o LfbӾ(\G'v2vZ3 v owټ Dpffw#Kc4ӫzm>oEIۆ-4.%;Rv}sy:b[%Y9N?7RΝUgůW32]LwS敖Z59:2-oz:"uQC*UTQiէ?3phBe:<)hRX3y|Fo[wM=n:[|XIJd bKj>Fx)0}9d9_7FÎNe]}M`&N0d tKˊ?ft} mpjd MdEҁ_g/EN/?&] ᾲ/n.~_nFɘg7w4h垫V#WXKޭ&Y'..BlO/Q˿/>Z]Esv-|>=n=gocJ~CY0/_|邿 ILd4ės*Be.ǘt,m78}L;7uK AQ)kfg|MZXQ7ȵVBbp5z^؛W*ޗh)iP*PLՉafel]_t]PP/bt[]7X !ZHNQ1TQp"JNV '@ )Ȃ/2 ](;iӋE}}X‚KRo>})[Q[%^ ,gF0\!%c A('VN0'udwnpץeU*L p^'.@؜8(:7qYħWq1N>;REQ& ./16G 6qJl}C dєIhk 毯 w2\j-"ULrS+Iܿ 0-kˠ8H,Nٚ1.`2lLl597S6eԖ Ƀ+(Ss޵{WPQCQXRmyCs}m–}RPɺRᖜ3{\'t3ش@5a ~){Y浛eW+Y$v¿ 3c0?Xuo a>@<Ԍ k}Ρn⚅CVz7>Pds?ޠ]R%{5Qtrr9zԎEQP^ U&F= ʢF "ԓ;y(I1+=)ːsTVQV` 1L7 kVb $U|&iL>z2)kf(S1v`'MV[j#f`{@|łe8B"N $( #|.F\kD7Kqzߍ8MbFV%SH8F~~3S؅|DVv :Ll~PzK& r!3$و FS"7cզh6jY'_o"QVM׹0ê_HA푮>XV6# ߊ]-Axe0s @Oapb8ؘr 9COR:wj/E;iD6`kF# i^BLl%1,yu15PV}{9\p~en`{{ Q4oҜ#ctk96ϋӆ@zΑsd>~m1۹W๽OW,IqgWv[/&Yd!rÝ<1"!x8.t S"Xo|#fba» {:n<6Cf)'cGJގ^Ȉx.zێ 
>=$*rg,R3pj 1j<3U NG!BCoUZqo/n`kEt>܌y2`tXr,|Hs kw*_ l;h>$;#PEDee>|m5k¯>{b\&@E9Ç3|[ hB9}s{'~ڽ:<7`̃qy0klս ۤtV&ש7Q1֦e=wpFN˶ Z>DUFrv>uaϙ2;֕$cPHKUM44=AA='.=YUH'sGpBP$aƪֹuP(1͐B_A&l "K+d y9\&|(4|=ظl۰A+zf> y uq=Ld6ެA<>>hh (l+R*7RWIS#/FH!PeeF1TU$t %cM+EF]3kBb^qU뿾}UoOK?Ӈ.Gjv[]Ewom`E;ͫA[jGzF9uUV_ttmIqTz|*6{9aXq*Kb5sS>1lۖd"`ƓR-`UJYḞXPrn3 \Vom̨7RqK%L+D"!*!J RT S*fV QGSAM(uQOƇ,W(U.;*U5/glA(,\hQї3)4'{i.;ϗ3mG=_| FOmGAّuϗ3 r~9k ^QXd^p/ϗ3˙9M B[B tX aqAw1=x%@(I TZNoI1 ~%֋.DEJck6Εo^\|s.ӞťNXŻ6,MN륑K$Ny>r{e뛍v}XZs Zsdb..o! rdrj!!j #J܋\AG͈)VK|1f7p_zjĕڸ=at5}~(S nnL~{c|a̜ l*㇫/?+o7YWl{O;5NJ!g[%-$O?ɨ$3\d.hd"c<ҪKYN= b rRI &Bnlh̤`hm0SכjhY-d~iv?`#blnҿ dKњǪj"04:YP[MW*j eƖ58mӅl' &8Ƨ4@ `C`.RYVUӨ)q]CL74FkjN Iּ %;95Fvx+fD!O&~).F"Tt S:ܿK/kѨB36) , ²N?PoHfoD7{#ވ,(vs_⯥Jw1Z~e4dG,a5{1@u fF3(FjQ2Z b-P:V!hO@?eV5-̢[?Rw5q'w }.o>a۳ !\ dxן^]r+(%ùW%zh`܉ eI^cZhL"h높 sRd7(XT Fx/zv}]\ o:̼ӵmk*v߻=K9ߍ1Kk\76vߴcwHip;ƴ!.jRLp[%qf̹4h" PDB1D1R<XD> fhXtCۥp=H4̴##E S15 xDŽBӏ;^wfWgLKx ˴ ӜNks.J`W62FlCp532T5jCQ;^i[򼜦NW7 ,杋e!:OzD@U3$]\a01!2rBjn| YeN]ĭuhV32+i@-zN93,`!fsדrr|c y.Bfӱ͘'BG3"&`9œ SJ?2/8i/?7f6kJrB`fP*h5 E UZJ-0 5F_!}ơj[#l19VBXȧC{3E4ѥ)Bӂ `XWfh; z4\ F `9jFR o"y'ԏ8Sy[џt߽VmK{/D,H.WߍYRd3YϭjNX-vap\pd1{CR- ZL"iMB;bRS} ʈ}ղ4AЩjAš ~MNow'㯙 &ʢV@7\gep gu= ۿYz1,IQ_qe#CZnS5& ;uUdUfboed(&eC$քCl&))ʐ{%SeB Q˜~yjT)QۏY$.)nXlRn3,. 
}F}~ѐbÔN9^l&r11DG1\NGb°"EjD1M:'x(ϜɛZpk~ b:;4(Ч}儸RٽoE'HI4ΪPlBqCMO^aU X5V-i3 4o II~L 󼘦Rg xJ_5RO~{;S$Ui:㛽yvfkm#Yk2޾ԭ HyA_me"]o~})CԐ^Y땩VkTr]~&fkQS JnkuZ4,zѶ9%?(ha=D[y>A4Atc*{-Hvr$EniRQݲ}[U/wrmT7B\])^<ά;NAũV +F6&1zZM>4:AXR6_joػ1v=et 2*hP_2ե:\lqS-͜7%6ܜ6Ĝ`|KWOq\ԕ`G.3ۧy+ԕطA1ٿhÑv!8̵r }ތAF0[7rG}෉2ԬK}WS RՍiN(3V W֢|c,wY \۾skXMqs+Tx߫qm"3XrSXcsexcYb̐y?sQ ;V徤gkTGǃגܞrI^NV/(o}c=cՠ ;Zrc lΚ ~]g6W8[y6g (MAOdW;AN(]/Eldy#J)ϻteLzȳ9߫m7!kǪܽ'ޯmΉwXX=+}(+}T2i-7}SF@mqPrה$}8o\ϣpmʐ0hx')2D<pK;XH)Sd=0UmrܔBt+"ֱ Q̌B] /?yF*+!C=swnoq8iJ*̫gcJBoK:aP}::q/9ٱBƜV^y]n뮲guAAM; 7#'r^.yn?oG?.t,?ï]7ZO^eWW}|{?~[riGʹ]]w,8jvR$bmER 5 G nf^=Ѳ r\/ʧ}c@C HM]Ͼ6`Ls;@[-b S TVзaeK9G[qӤv@/an'1l~.nvςC {"ԮWy̩8`SVpv 7\ 9.#2$jG 'nUxsv }ygNتDfGX &x 7pd_sB Ș KP/rКfK %4`Ywc*C@&&P ]Pj>$%dgU.oN#W5Uvwrpn0@y;c,~ APCFS-"X4"V%V*|ĕn ~!KO=a`sluMUZR0&l5EM SCY6;9tHgA75Ȭ2'ޭ*EFYvJ(B#YBkNHv*{⦓PKgR6ՓsFƒ-)&% "ƇHև'2Z+ zCzD*QUĶpD듑[YEpT~^ L۽ sۓvМZ :.' ̉{0#;Ɲnyf:![/7bpqyc R|$9BȪƎS"pf<5)¬$d*V5.(K(jbB`Bl<+FB2%K,Q)tFAhEYsU}RgC$N*G<2޺JIrQaI/D͐Л1A<+NnC$\BAMOWVrW$t;o 0;yY^]uKhVxT d]X\W9HMAnYVO'Ue EY>! VweyV'8CB4H" vW'8Z-$쪎cTCIˡ|8R:r.lfjJZF?T1L;U6ݭcUp0:mSĬI Kb5L>yqer{H3.ɾ윘w l|Rh&eo8O9P:8] M){M^6$(Ϧ(15f VE Y00?]ܜKj`j̉Y }J1BCŨtdcn%A4%æe"'FEg,%@u汥9VyO$fŸg! 
jޔjkѰk2αϿ?򨪝_95ұ* E#/iftT_]gSOU::_ ))u" jA9#HIT%PMkV0 ]doQy/]m⏮6)q6: x PjEO&y6]mr˦MZTٮw,(MC*V5*=wЅb6+*Tp^::['28# #8G S@{!kw1]izHo9wfXqMMJe)ze W"\yәZsfppTSV 2/2pzorW_mXɏҝ|.rKwrxNQ 3IhɳlnM^S1YKJe,>k'e=7{e]S We@66FˀnPyF)X2j@!^%PPN2]Ց8܅PVfˀ37pL I븣YL117 brsw/$f-¨u;S3=tbIhP+;i(?=}P u8h^ km0V¢chR$CG8GɌ \ ZQDQ%V57BVdy<-\E)t:;Ci%CG.9\a@ \3rF5΍4 ə:Z  i)H͚>+*(ɽ]tSgu(PǢJgtJgt*K(> R UBCFu^Z8gGIlV+HMkEA)V_1l&btߪBym" ~E +#9 ,fIY8:nVAwyPRhLީ)NxEWN@NSA0PGT,6>L+FyZ O!rjn\qZp 鹈Vx թ@ 5Bv,m<ѱ671s,0" y p᫃Y@IZ#1 W*]+SGӁQGG¨heMW3M̋UTd J.[]?ydo#3 jJ|wSv^ /Ԝ%W2srd])зW!DBp;, ;Juaq K LT~w~JﶟOg RN3e6d(/3ޏ*x\/AU7OawI'?o,? =;廿?;|4t)}i<:^份g/_!s|z:`z߯0Ћf2.qAܝv;NF?wEnpc38xy8gŕTkL:Ώ+[|8k8KxaL8byiU,g Jk~HYw:> {fzL>_ۚ|uk<*ݱ}pjNbO'@ #/&%›%~W|)ʸ[>8O +Ž_4 UWu h2nIO`ˣχRi1{ob؟`7 Aʲ.ۇ GӋ3{_pxh4pM2N>Lk?A-?f ;zvhk.;sP>C}[6驽`oYE|5!QwZ! WN?7'>G;9y6YW>y` $;CPHBo3KHMFp.wCPbqv|7Ɍ]fZ< _W2/;z켿-ܽ,{Yr-쾠nwo>:Dluna.mi ka^5`L rp.=spcn91\ eie(K+CYZ+Cc *.2O0e@F EYdxr6t)2 lw&ONhnƋXۭbN.vH}5GeHn]׶tԉb*Tgf Vqwo"L ʚ{) ݹFޚW%fШU~⛥,f)7+V*4)VU0`x$0RdctNTd#ZmxkdC_5O)RJqRRS8k`FRcg+-Ve;$+P!L[kx T TV(0Bo!P̒,hp. RTSx?辥S T` s65s4DML`J'Pқ+Ih-0mP;w6l}5 yM:<^@=UiPbƢ9+09 ?Tw{F!4hO `>Y+y] Ql8- PMB H4A.Ktn S"iMz':/o6rm/Okx6^k5^^'KJD-}FK.SƜC3Q'z&8 -j w7WOԮU]bׇ8c 4CVbsB0`k8*IZNS <چe9wWSOU?UTAf'KZ ب"GT$9<}&bdD稳s:e8Pn`aΰ_UԚǿ.jSuKY#{ "|Ox<;A/ܵR Cv`RgV; \8<:8Ȅ&@v]%2NXԊsXhT+bQ+bQ+Xsu]#g=:ȷ4 pc W6F0l5Q9J^EF#iɏ)VxOA^E`ؔ(7p/βQ<4}uz?y' (=tZ&@l[[+`A6MޥwpպBpbE5R6NfPRٻ]onhۑ֦w咶Z=֦6=QvZc7[2]{nC#7MjV[7(Cܣ/3 `eg5 ThH. 4(o-"ɆFa홧./e_][IQ1\/U4>[ zx29^ύQ&  rR'Qp.RViɮ*xE<%4Jh;ف(̊BG9wpOG4 Zj.JEMY:YY:h[Ʃ*礂&T* *hs*h,=Q&(2'qT#]@B$Ӓ9q"@ SA @n}"h4*phn}{Eo nooVxaVijҏq4, %EJCڀwkd*Ip!Bt5 y dTXݒmj/Sb??ZZ͕/rc4VHz 9I gZRv$&e92 A>`.ܑ'[X+ CZ[{֮cTA뺶 o6 c*hsr"O29kw6 &vĠ \h.ӴRn2UVYVhZemUVY[ed5v#99a|VRJ$SFK&9&pE]x#k3%^Nth|n.;=P)cy!gA&NFEAEj]'DnHކe9ߧ7UTSuO=Ulp7r,-Tَ]{|jI:!8I&'HiQV .3Wg2ʙ|o߄ðzneVXJb`%XCiM杯ROG7! 
H H,&-\*!s"+'s!m'Z؍J֬SZ0gZQZQZj xVhkL"- ՎS$'r\h`qϠ`W{} ` {\!}a+zvXB||l_rBQ+)$y1NfB+RYQc*|VgQ<~W`UV9X&7.rk^J jIΆ B=$3%2)yu(_&RS< AȄ`F?`% W|}ӏhy?{OIkAE`"ۣ|wQ1 I06+ʀNd?\qlf4ZE\k Š/j%aĭ[$6\^YlYas. i 7omo9_`*K2״p~3w 쯈I?~ۛ~o1 9=ݲgZy79y]}^֊_evYc{ulHz,Y7>}k$F#ȱ#9?L_|y`}0PJ)uNc&1\ Q%RF))!YəYK/Q >{ƃ>(`itdw͜?絗^4ה/]hswv:I*%(f?;{f6b/_A*ηWyWT ^tPVW;4?IPF␢p2 ;ei:")bHJy)Y(kX V{/+{|/]|mGF[4+7ߢzEIĈf; 7\\7 GލG#ߗXD )vKrPԂ/3rU#*Ds'`Ȑ2M[<1ͼ dJK.x4_,1K=oa3FN$T+'(+W1M}Ӥ̂0sdGIMHCNyϢ*&EΌd^M[fm"Fk{П\T acwr+>sw~ue.H )$B I!mtM\۰n~ّHJv$%;vv\]I'S[C6L=P2W#Dbfol\Wn!.pM֚,;HtU3; ,hVe. YJM! ނ^ѐ4*uAǾ5l|+u05 Йbk"a+ $"JeL`L)'dZ`lY0F3]k"(lrF(g$@@Fep3}Y..}-4Zmɇd1 #%Na Se0%c4Q*>(΁Ls=N(0KRѳ< *w 3JPf3à$[㳯&dcf0dG10/yN+9L.0 aiSH4}uީc!I4 д$ϮHAal#RҊmxЅfc^Iޝ}^LLdI XLDZp@KTaEJW)$)!TXt#MyQGP!Fwhִwqi~-WWfШlYy;=[NI Nbu*@4HEãf%A/H#ʶv>,f̣x侘5u>i,&q3wafluna3D(5dԋ9 8dmDMPr"4$g b(V4@JnP =(+uTEp&i; (.!GcQG%(?,NۅnFl?O۟|͖bսx:i'm>5~?]|ɁCQb tsr.&)g#^! gʚ8_AFÆpvx-EFu$d PZ@fz(ApTf~WefIG 6~wa*?~q.?#~'lΗ߾x$^\I7tyuyz9_λ yXRWWXJZNāAD,mʚ#kfau@yJZhk;}ˊ[q<0dI1 xty,$eL%13 }&Ck:DŽPJ;LPeSfͭD9Ig{׼Ȋ޷,Y繉Ot[_260@kv"zK0OXg9|Wo_9^c#Uu.fqo)T*ʮwd]_[H:[t(VB.MW2FFi7kָ8=x$IpO6[w;IѪɰ{ghKc?`"48;VJ;pH߉if T?whQeȧ;HPF`r{c [7W!IYE5QgإlkPGCm)lJ:dm:X)fj9+7Ĺ+Q_8,ߠxy=vgP[>3_^' 8%bҷ]tuuj爩;8Q/KU*tA2Q%o}%U9+)x ($Xl&}@U)>(n"MaȤb{}~FxKZXsUrd(́ ̘:zMt,ls[֫GRٗf%Tp^A9}Z"nn| Js?@{unv,o(N(ۺb~B\bvtӶwdYf\/W0VE yJ-BVT JBųdr+kp{U.YeW8c]+⭏Q$;b%w2pQqB,>)::Stk]1IP{E-V\UsB\vOX@(kQrq.Rqk)#6<*;:Gl0F8xt/2z)o_|zn;-֏;h0h3p0P'YdcWK|[;pt QJcHFMHAE,eT.B\TZ%v#GNFs3bgһat(i Dx` h`l&{vᰍa K=+,r1>\ZG>a EXxgp؈+mt(.Adxmwƹ 6`=o&R+FX<-F'k i*~;Sj2l |/4%,JBĊ [s6QYGwxшpK,\ɳ;QmiG1 *2.шZXn &l)M ~ qCfpڄ)01<\4"pdB[9A^S˗|6ep Wyi#@>Kp7dx~w/~rp{݋Ǻ‰oN!oV| zVݧ%E}fTO-۳ΖkyFv4F){8]ORz^}Лlzyyg/88`o/lR_/nU2}~*ԻwOaU%[ïhL=@^ @^ @^ @^dwho)&Zu8f-r%klo="LU[YC{~:eTOtjRH!%H%BJ|u?i֕зȝ,b$r8Φ>*ꪧOɧCѤcNQV=WpJ<%Gcz]ֆ,i a\{Y`[د1{ڢbx6+xVPfӻťvK2^)m{-=$|AJ )̞A/boL 1}K5usk]SG.LtT4E;*|0%e<{5O  cY}:I>J'i# EBX$$fM*L!t8*xg|taYf[C"R1 IEPR+LiGy#0 SeZ9 8B0Yc'CCKZ> H@:"'HOKcy:wt NmQVj2I=sAp [P*\Q)z7+ PqtVhIQk A=ALXTHY@^*w*C&Ԧ*L]C p s2 
" C~b]j;5/Ɓh[y7F==)F^ T{KleJ6UCB*=S jnPFZ ,Woz8̊N G|H}y}|-~y{)ɦTo:NvI.XG6>8c"Xf̃ckUBP4-5S-%e žzR|J0lVH퓩m4>K&8RG%I^g >gXg6,3wd 2RBx`2?XKSۂeТɓLY}%1JBxE0\^l Ni80[OZJ-ARO%biIC!8\Z˔x| P䬕<Fꒃw>3"[B!=5Tp!B܀OyPY'Gyi)jxPgn|`4 txz8XkB9\@[HDn mlZ>MMX%Zt͟y2%S&Q6r "\\m#H#^kf zx7&ՋufT"V6APZm63R(H7~--2Dƌ7Bz?}}S_}w^$[޽!DMc;@/O:j!xS~]ow%q9?Wտw[D~ lu,Svmr-i^u]?ٮ,$p #&Z0e#q*>\_榱Ǹ\<\Lpऑ:j9mr*!{bu%~l'[!<6d~OM)[ߦw_q*;tٚB.JIWq5ݧUF<"ɝ9cCP|(x|fI<,e "ɧ:Ah^w$y.:j(VG)zTnGc~RDv3k]2{ݻSA "q9_v{X]P߿r j.;_j7ٳ.*sls]_ G6NylfH%JRy^ceB?S&ϑq{5yj3Nz~?=ǹM ۖ9]AߌWv.Xԓ3#k|ߨHqW㷩=27~]zs[t{;2IဿRK9/w& 5Un&i3dWUݠ4m0Zf%K?~7Ix懟b'7=aawG^rTK!rIT{#t!ι}#8[ZP߽9{><{:q'3w<1?˟Ϭ'lӆc[BRcܽEXO[>?I+^sr2jclk*@:Tc:"}<ϭ|OHg ţ/H/x-Q9_ѿugz> cbW϶QGuc~aK'58iXSX(Tv.YGZ"xs~M Yԩ,uɏ2lgqwqþ嚝rLlKnbKf:м!}}8棞ԇ6#鿢җ]xmʇIjjfg6rIQFlIv6~lǔ%YHPȩĎeݍSq \NA$朢/zm((*"ɏHˣlk-e썊9?+>z)ujkȠo<(Qh9M)Hڳsʹ$ ݇"C}(2tr kLʒIMcy*Rj\QIF\LHt+d%xuUWw÷1߷sl TN0a镂>#IFSYˌ$5 #W[f$Hg[yc߈lnnKt먭E(=PHO >kΧ‘;}bkz=8`n-Zi}ڤj A7On j:cD7+qP*D8%VTVDž+n++ek}^~Ad~E(6bH9"ieNam>1l[#]*= J? ړ~E I1L\0yBai1:,yxy}-!-̩hʐ&V)C+$*r7/ggY~Ddȴ_;pqLP/d"An9J 3pQ$5-`/h=$odc5l;֞ @5˱G"TDe /Jlub-iJ9D }+<͙m:sK6+ڈPZhD}7^{]I7 {x 0rj_/yP$tk)"OYвSEY5J-($C4FPژSXGǽ \}pdW<+xzr,:vᥳ,GJZNҀGwI(;PIL"+pK:z A55Fa`B~^_(FI g޳"v\XES*iHXC(0^کPC6/KmDSA^_`rb5E"1%t (ZOPb-bO廽R؂@Aić9GlfBp2m6#b `CŲhСuq}{k'VpSCg[]lm2 4CZ"'ݠ8.fkZݧۍV𚱐vuh=1ygVZ&8MeTGfeOQO] uwtqF:HBc˟hWh*'Up? $*npVm%bJOSZa؃4*D{ Fsy7\\**yUh)8Rm ߓ5yr^<t9kޥC{%2cE+xth<{m{/, / hxOK>hxgV|c&{Oo?QjM2N_FWhagwO+K}vvV%J(xR&~ glKA2t銾u 0hw,KmGUۣLS='^ZP X"Ȱ&B&85)T#4d<չӷ&ˬ AZh_(4zh-aCxJ8|˺i.{X!uݻ˃J bOx$񓝡>Ń(8fLRCLP>iY<-)RP.0d ߁h2?[rmj]Sm,Hň wyړB,0䌱qgr(e Sʴ4a 2+A 5v*,x\4̠xeEXӞƪXas.e:TNA5~V"Zr<]~ߎG:^1pnʏEd踾f#g?|Q>]fI6'ZgԖc:srVgM|s.E*Ujnjgh7n6K4@~G8-NֽAr0(OVߵS镄ҸWB B45S4Iq. -%vD`Xr+}y٭l2:1/S ktu{V7ڃe/(^.\oGbIQG-ޔ^ ϊn x].Gp+7Vsw7v[t|tiEOS'(I$T5iXoBwuyt^Mw XOR ]$D F5xtlr`$䵴n `Hz5eNd<${e8o'4$tp t1_? 
+=87r|Ҷ'D!Kzv"`g̶<.R3J7[{4Xko/-|3^k)`eB2DjJr}^b 3rCoM 798-$Dbݟ`䩾y喦cNoK\qt$tHB{0{/z֘'-ס V*Un|x0Q dmЕVQ-0S'+R-Nhô>DSW>Tf SA2T|r\ԢNUBZaXssvLlR1pjEzy@?Bf4 iBq#5J"(LFMLKs:8M i20)Q(":*5-)Q{s '<(=6?$eE6Ayn=3؞pl>KX0"qaN F@r@E|hR46)mќ?e6 9J`ɂ8Tb<O!To~5P{P 3}I/Uw02(o}?jقӳ,Q%$ʤs!u)xzew-.X|ڼ0|dv7./x.S78&g\smwͥtB  ń[IZHYjiplR4tn<C7$r"A+|_I0=c &/Y?Y>DgtN6U\V 6VNLxRrrWuU%YQXŻRDb r"4 Q`ݘEۭX[̲L mAyNc 4g(4.KPnRL GҮ6n.'3ٱFg҄F_O+@~ pB\Dcn߬C-[zLMU ` QV23YIDYۿ cpzNaA*$0^("(_L'!Qoxi=yd{c*i+itJDZ1Tah)7`R)EN \3mSf82gD2 ^('DЯм4Qbx)LSٹdY J/e:έU)kzPv)Uj!)75YODb#ώV*X'jkՔ;*9!|"髧Epe^ U=*e,GA\gS͐*ReEhEItJ[C#ꡰ@Z$?FrNp -=پ;|J_^X#v@Ϗ[12]3DTw4lo<*R* f+BJ_Âc=IOP34DY p 'g$hꦿWcgLi!L'0O^b'.lŸRP\7ӛc7W$Om*@4q2+T r_\["`wO' Ɛ+&@4&33 u@ E5B$!h/>{vI9[]”ܠ꼋,gq#Ts%Ӎ4Mi)sA R[ahj)+pd<|~n<L8x*!nq}AYIK }LB7Z-n@UnbgXQ2fW]Fi}{/:R޿YM}tWvᶎ5s~fݫVkZmfPq>?_(7NV {DDž-h痓e! N:&&tA8|B)9+:)lpaDwU<ӧ/-}8^ ktU]բJd-r~[U1o_vȑT3DіSũSAa)?ېce +0rXh!TU!vDC %n1.?C#Z88ͅXDTS:DNrk׮kr~no6wA*g;PӵDm!4eSSwa o[gr5qJnݳG?Tsrb{W AE*}ppG"V) Ȩ<<ҊfxIEo|gߡt=B:8R"5*dʱ4RYčbŴ%g5\6UA$BnBVձ&A]WhBĺ/h<.k߾8oݺhkUQWnH3^QOFv(L3 AXG^W5_>Z|MUQJR"JԚd\#{|&Z\ ^I6,$Yse^Ad:%1otqy}rVVoۨšEܰ{1ZEVTeϿVjX"7[nvfoN|JIMĕm T_ԤC(ޘ`CQ 8zee"ud䱛&kE]3D:q 7:SUcqoź?2lSRai|ąf6 8Au#0 YmWV+w=C9Ws -k[+ܛ ek+M\)xCı YCV4M, LA"-{hSU Q^b.#.(Д&6<ڿ%ٹ> 9 S,$z-;):#s~膞7yPq~~gǯ~ hۗx?w V Ep4A_߆"wXf]Z\%}}6|wtv{{ * ^3O_|tzU>+b~s{4~L5(>pO9⁣OPY2L3*`#,7X^,^ԂtCR+e5\d H4ϮW n??Bxn[Xz]*i,R}x^%'3_{fϮy0_W{{cШ_ pzsɊoe?1BHp? 
oQ\'8K?@gzE}} >N:ʜ^^1]WYY&h]s)P:z"׋qozFkzRgφ'#Pg-?~-ݻyLn6|BI/rQV/VV?3 f"v֦2M>quOu?ؿ\Kr&' 4EtUm ʍ!#ږ?݊ADc0:ނ y <7j@:gB+ .SRUJ0Dsܑ`SҦp=XO;kR+7C %m4# QXPB #4wUGA5 n|GpމBqs©px'ފ;N|BHho b$<\̍]>sdw|e(-G'g.JrmE {r}DWkz/B|>t}֦G1X&h$s@uu=߆ `< 7h rFv*3'[FfhBfa*vkstt3k-]TMkޚ|Eq _UWۺV L܋Fn{nIt%D[Rvpv8WuSqr/ZQu2iePr%VqP2T4xEӞxJpǜ,LIQ %St%8*-޲v d׎$:O?);@|%Ȥh09r %%njFY<3:l&&i?{WFbgFdaX vݘ}فgݶxY,,JY"-.܇XŝG\^PP凧Fric-5ݗMhD˿;lalnx-X[yߋy\xs˹!;I7G@>G9vV?Pwg(䞞rg'0\5׵hp,1 ?~1%KVZaּBg|.~.?~K}&iύIXc'~~t6=k$i]oT;.5]ɂUL#ɳ᫏a2Tf(J !k3GWtq|:EUD#'Wg4eO˩= 'tj:|Pp&G>yl8<G&{}Dkt/+٫S;?^k¢0jm5ηΤ\߯~LFRQwz{BVؿ[LhVY_oiخ1YoyLյ5OR)|G_9Qm.l]nw6McӘ/4:b}zsd]P 1JHk1Y#Qd H$I)ȊZũԿ8{fyrƠoTpQ$ܑSs蠜L $TƤT ^&S#tD%,ɨduc^i4_mUpAZm ד>дAߔ?5-@dM޸R( }( @-Ί1]c B!(C{cPa])ThȂ$+  zA8~?*zd7LDtʭj>l 8ѝbqX`ĢIb)ȆA I"GfWK,a~),MAy++Mฒjwɾ X|gK[Zp[Șd$w .R1&`qLrshB΍j34$qb3Z9mq җJr5;!5r7b1*E^3"gkh,!8~2;SԺʡIQ&%Q+Om`"5rֹC ׄR1E%5&#bB#PzgZ8SKS'ך4,"G!-re6,BPփ0%ajǂqy&1 )#C5 ]pj#FwӔX }(d)8Ds"p9d*Eڭ!k9`UV輵bBcK+:RZ ()*<|0Jh#td:djHJ"(8}GsЙUxt씌ٲ9ׂLM!8̘(d2BEb$zhH+6+0$mw8]H 9r>B1+/W1M= jyg"4U@; =k@g u#z!->}*٠u*4=VG`VG.,QVn$sȖ [p#kVˍT ș#ZSN*@G 0hVpNս#Yh؄ݔ3)u+(>J&a^nXʞ4еnX׮hcBГz?]ߙ/ֿ3O|]!('݀p^d-zz ȁN)BMbtmˡmvΔv?߷H4;fF;VʢEAlRӅ:^J)("n;֫I3y9c |]c{\)TEhI;S|HŪúL$\s=sac2HH| '!6kFøɜDpv'@) H@ֻfI1a;Z$!]Wﳯî.CtF-wV<rD /Z'|[Ac/r/^[ {{綱]1]n]&jֺJ:..ԝGGGG=j bS}&b8p,q2JOc>egN;b@ 4N[`i&ߟLN6ܻ_䲿ϗ6?CКPaut!h&M x͈իm jFD3"z2f1 b)Tq}#vu/O t .ڑ >A*ڽ߾|TeL|c)腰}M>'AܳG ܹe@Kp\^@H8b?: ÃB0PgY"Ŕ"*Vȿh zIZjJyN>F}YarA|5?ָm fo|u"M -?ml=2mY0<3A5ixb:QLȣ6RFmj9@;d=jtJ /o?3k5#KX$!%~%F+X'GA%+o 8֕Z5n;xSP/8Cn(=%t#ǠR{:2Oʘd2[#Dca>`lw#58B@X¶Vq00Vƾ;xS-mCeS=> /:/dD,MTp1 wrcK ʄ3KL2&bԢqViI9J6HH:A%'"d[ҵR^xm)aʥvg]-\i,JC09'[֝M ƔP2+'Hka炈`yJ}I`g99'W-s)-lges$\5G|v`k<m)VfP6sv"bDw2JsT /pٳ+_"cMH(??){HS<]e2O`sVXQYL8-D! 
n3Z -ٚ9SY SH1N:Ba-->1};'#Ũ%i$Q ֓ kcJTNX6ś̑ő/dp` XV+_b|q}Ӝl#NS$$S7ZĪ8yFP u:4&+}4 iYS3yI͕L`@l7<W9pR oYXfcKџƿ>"5?_>Gp?Iܦ6C7UÝM*9dmN`06cҒl_*!%p$"@4-q}?0zpx+B~'p8RRT6%$ o-TW6EQa2"yn5oW 15){%Ԙ x$xٔ E/Cw.U?|'ʠTgM:a$OB+0` WD-x$j^j#Q MAR+.JhC<5 zk(Y.GEDvQ\%?M27kΎJ~2:r٥ PN%WŔMJ:-YBtLuqW9$i~'FI `Bhxj@EW|=xN1lwg8;?U9Z3V'7]\R&Wөׂ8|;}g: uٸ~~[gǙ`CvBA&}=bO7bt >zSP67|QNR,QY &+բmM + ۬ 3$nVŸ"8cX'`%T"T"qQU습#;ԧJC̚f@Gt!c%Ew=qQڇ'.[!*AaFY < >|-,ջ+RtuzFK]QyKڑ\%Auxc;kQ#hmesf$]H@kJ>Z"~ 9D*!BP7D'H`1фQC^q^N S<rKAF]WX.ysv Vԓ@~@xӷ'ͽ:'Wǻ.:~삩; ゘=pd*ZO-\|;X4('+aLrqx*DItem4-ne ^w ߖkLqǴ:o'm `u#~Vp4n6 @Ö\sOǢ9jgbPu~ }T+Hph~ċ?UbR,ovbr4T|oLtme(eID ZR.9|+W[JuQ(~Ku?n8e@PIBR\ P"_)p7x}l80_t8ET{RGe#.WrBxõa1 9!*XS+;F&'>F ?C-ŌU,'u&b\:]rnS`ADퟍ oyZg4 h2~N߃>J*f5֖= 3(xX*Y3,LbiB9D%*iɕ:Rg5\J%hd2  BeLz]zÍt$?QK|]?FNx/p["wLPw8X>8HGӒtWzj<2] HW˾WG|y>` <1j4`bYR|~$EȊ$ILo LL&PRLp\vޏ]fxC+>LM(Fӹ{*Y0`0yIl "KU Mp(D5nSuT# iJ vOe3."G4CͿ&+NZ\ g 91?;Կ*[4wu>n뵔_/ĝ-D=JXu+Kml+ih@`4*itS eÍˈ[gxu]F,(.\2;4iB Z:tOK.-w&^7cO۟W˴kY Wd)?sOҔY,0E]ϧ[*<:[[ NDULDm (m~[]nj}٨^EBl3H2(B-N ARf!l Ws±URazsvB]v7IHM e7O.\ ;5}eTŋu*j>[RO*~s?AM'f}֓Òaax}1vԍouꛬ7u%q}nr-o^{ua6}A޵"? 
<ӓy^ 3TȪL -D՗Ϧ¸]FD鵽FB@xD*ĥfaY"/Qy~>Od=ي[!ֵ!TF3;AXW^gGwz̓Dċ?037͛Y͟biatLVA7^ F@3b,_ n^GP~g?KzxrϾp3fU ^^tcPW1B?5FC Rj{QC^*(\% p!%Ԝeq8sYP,["rOtՏb~'ʿ/x( B1~duyx<,V>n:{U^-c5 ofT*̀*‚`V HLK0LpWƷFsy@@T{A5?_JŊ0%LI KbouQŲ/]H)N(hBD%fKC ; 4F\(B->Y!Rr(!6u}"dS%h:yKk7y(t6ka1'Jd[4#ЈOF[4hӧ]8يiĀ|W7NB8o&(_tG[!2 Q8@]KHU)%WϮB1mE6I`Yer?¿BK䊛+i}8{31!)cc3 Q9D)Z"(%lNG^3a((ReiMT@E qiRHC8$e(`>s/ed^6+vt9g 4EOCY_fu^ }h)ty48;?X4 c8vҟF/Qc1[>MߟANUS5SkʈOW19R #r֟™{]4TS\E*U 9Km70E9׹d랗 Kyʢ(ٟ贸aKv6nwC'D9t![y9 kͨ-s$pP ,ڨϦ}AS JwOF.:A"q±Z@ % 1E%?O&b eaD7=tJEw.ԨE*1*ejHw~9NC'!?tC'!?tR3N3koz2 !~ml7Or~Bȴ LAR,}V*TbىW-u%$(Z#zX7 ][" 5o|Pd1>p sYc K,p_^K.To6"6f,S/#'H%B6O,"M4xĴft`)2\gz6_A2}`g#I6I"K3N })QS$%Gc;#]UuuUw]_)rt h 5X[o#.<5P\RtCp65H'yD:d$>DRADOpWH&Yj1!*nt4d1̞‚ϴ$wfBVH uvHAI{u"lI"O!½ਹaR!D2!O"Y g g&jHZ\jLI["!+x^j 5eݡNx)Pi /h`dƛl?$A*NPK-rb$*Y.$fhFTܢ4uҁd0ȭyd ;Dld4:YMڍfCԋ+Ɗf4$LmeD]D+'bȣ,!ې 2%%u[rl jzЪ^3=w~V朸Q0'ˎ3ow;өfnO7ukC?iTҨ<4n%LBR>cuZxg\%Veie|cK\3fK[U_2#Zԑ *CȺ'ro tzs\*;EZ|G9/KﵒY0Ε.#^3 eJmm|uqe2`⊾.#Dwa>ngO心[>x_J9\dmYk$$f+t #yW˱_=ݮ`d5fɳ'@am7׻~mۈd1_i=Y'-W$b%[ +d7cKT71TPD!̇Tq+Υ PWćFa0x!7 Ξ~S8w~*O/3EQ79&MdL>5Aog}jIpC]'éQ6;m7c̛y\"#cҨngGfxMˏΥ` utNZkֻ`b8J[E鲣zX7e/ĢSɇE_t9SvqT_hHpvWoi'+8Ԣ3 ߝȫߚ~r:~V$إQsմd 9(Y &BM(5HS_tֲ眰Ƶ>%"KҪԀlo 6/2V‡D@dfI9j2%y I&Ѫ]7C6TD u~׷_꾳#;PQFKDYF2"dD-\Jl6G)¦.(|) :q\>]Hyh,14 x6hJ䓛e]p,2A>+N#N |VRd/IvcKPq_iD*޻~ړ7l$nB5\6w?K&F8>vr >ZY7uw|r6F'0 C[Xg+ ZjiHTFU*"/v W[&~qr~w><D% \tJA6ÛuEe3R3ĊPDT25g.>y}ni4Eƫ[Nf49wTp"%|8'BI:tF)oaZѴ#Y|*/XXv;' ! ӱ^V Sڪ [=57OvWEZfO6~I`7r%FݜW_ b|.V zqwiZBo?LKo<6BT*B/gz)X+Gd}Ruh{gw E @!TksׁL{|_c_m8i͊7܆lo}7 ,H!35=#mq9oiB"? 
Z$R"AP*KR}SŨļGqJ 9s~gpm{DA[d`k-hT]͓# GC6nf:hFxIh3NbΜ)(e_a,NSxRʊC؝Y&L,c!jy ;#myZh1hu.hY4wpdCn{e(kZ.=ʊƖפh؞CҨ[CXŽQ#kۈOHtC@%9ǺE]9jfLbK~Znm'!F bС)N@l孀[RIk0jШϯ?_cJgɣI'_>o:nCy S3ϼ7iRh+yHd>IiHL*37,T(̊3dta3e=㥉ju G6{ c|,_,\zy ?kEgX=z|%V1IZOe){XjmJT9m}q E1 7Zd1'I$V4tg>!)!R$XPE'ZJcffۥ*9u;_J,~%FaPc!p}/d0J [qGcxb['7״6yŷkA7gF tќGܐeH~->_}:hE5hf!D_r# >w3m}̇(H_UPNg'I8/m(ت\yF7:RWr2'4MR䅳֢,2pH ZHE^+J,cȝ:_C97FZ *J!V#˫= _mt@Dk-& ];ЅMm{݂fˍQ#:mC6F*l![tth/(u~n+nQ1J$iyG{ &mٕ(7ˮTc5,gTWo/JXedK8ww{,xB{`2^<ΰ[G=77t419puTvx/ꞞܟBo!4sak|[3>-ۓ4iL;a\rLt{Sa6w%?wg`{f )z`ĄLl!ҹSQ6yl|דuQj!cbΐʣQk(+WFSeނᤵ:r o@5ytEٔo5܌lC6By*n\\% XgMB9rsE?icwOZ٨9F޲3I)Q<1ܧ7DdբUcjT>wQcޒvΧt{ؒ+I=,kA` 쀥n#ax[R~'%yКЉd'ȮE+u;tEBa&m ݊:>J +aOmNE{QZ7k/z29@ج}#&g_b;5 _qɩxxa }z<EO{zn;Djyx*YV׭E'# t0D¯ǡݒuuJlUPTbG-i/xw\-YQ\wrɉuq!&$C4,2g9Ye;O#eHeT4Q4 N{o5m3b=VGAҋTCr&157:(zJ|Y'H85R)4l85 ?c[ӏr鳎9]&9׳NRfB3"FĎPp-T1_5VBz)f熽:T&|+u#cW|ut tuwc}iEKUqo1d 6@+$|Mzs=:ۑ~~fWέ~I}&y"飂Di11g~@AcR28Ij?ٝ,5*7oFҘ AowBn<p6@|۟[=ӑ6A9{gݼpŀ}ʅ ]|7H 1!_*Z4"P㍌IwR%Tc%QC8(2e BcTdN*6﷎ZùRr5I(`\ӏ=˩ݿڱz_ߘѢo@݆ X+F\oǜi00LDX;'1բYGh6A5~%͐N*r9|ҁhd"U 9ጘ_UIw٥U @)΄syiR:бMřt)_c's:V}%. w78@ 58؅\ ˼ĺվcz1jn>](#ό^{B B1yC;}" {x+dr IGM:4s<9ZLrё8. 
.DpCZ'7ޯi ;h:_]~ݛ7Y]Yv]Yܛy~\UTR|uu<_O:}Ӟ' /Hm(hIZ => tJ{0=(NSL;H%@@4scv]`֗x~>ѩ?B ..Q͈&;8hRH*fs]KaNZ( Nچ6Lu B?x6vU)’3R3JΌc_E4Ъ{3< Uv[(?x>h21Tbs}"^|q{bsrmU`L?8[eG#;ys:IhgO|OڵvQy&<9*ߧoʠ"8rA9Eo7y!F+]6Y|cV%hw1cȹGhx vO_nò,(㛩~C=yĹ3r%X7_nuп_ڛ8t*™fcp9ѓ2^WT(L=+׃%QT1Ǔ,FK у9V,6n?C9v,@|/]-ˆms/r%Rvs\n4~g~H] Y]AL|z;e-3v5!fiY|oFON˘DFt1E7)Z֓EK)YY( j$sPL*a.L jn>8/{r&cK*BUn:fxM )6J>ъ6ʆD<×r9f !7:9CxPGcLwi??|e {|h9 #L2|M 0:uS;ThjgcLVyojvo$_y(̨l;29Zs`Ǎ:9UQ1)&: rp߈#fO?.nFOE^Z4s kOºC'5#}Q~k|ULۋ+Bz0GvHgfi1:@@xǨ0>,M 7!~o5zhpؐgD!`܃l̐Fs䔆v\x&IқE湑轸`C(Qq:&R O8D`Rt:}ݻQ *owS/oQ)c)PEdRh &8.R*t"XR'D*\#/?~{K>׬ZoufSxsq ME_WSTu?w/6(o/1c[ Ms.UIm<.uqJGfo7" |[|Z}X>sE^6ڽyzc =XkjHE'jѴmGĚʱ}?_yKkttq!&i<$ Ĩ[l8 ?~?549j!@Ũu"FN^Y/D-X6'cNe~_3[ŻӺ@s3ډ.Ч||o>2lh~sNmk/z_O D[7@?]u7/h?۬lwp6(bɛBU Do-,nRhpK*o2O)1R#A44aLf%ycsPBbUV;Q*?PlFT{|/selnR X˂0^"{',I F=y*"I`Tc,sT x.b?j'?]i>=ÏB\mOaَ6X2KNT %4jh ʈI-D.mIn%IKsr:>b ysasAʝډ4B's.S$.;8]*K c/UP S FsF.#H&@1->s;Et%$FL;NtGE`; 52@Kz5U[k{2aM6aM6aM6aMۄV+PQPI!Qp 0h)"P㍌j7{1!8?B$tdrL X [+"1MDAe0_y=Ex-^`0.}SPH-R\3 UI Gs%k#SP)b /`p@ "u$&b85x 49apLiG,LZDPp!Ei}YQC=ФFv[o X@8fD/Rh1̱Sɉ&'8lډDo\$J@ى":{GWX . 5 DD?L+ΆTFFm)VX9FKmOj{ȩBTe&F#ri p¥̡!K>V1Kq0/s"%'`m]zIzAՀ+ǟo&j|cV,*oj}_n+{lo~n#S.fi$xDɏ ߯ hIaBW/cƐz05A,~Ķ(m҇e E)!SR .R>G+RދZIM{E(&qz@i= Ckbwܾ4s<גoksg;;hӪ.1Ǎ+| v[8a'_.ն4D8_Q5b-ɑ EUY,_cg6{Ix^W7aeFPrʈG+.!aw2pxέZ?H\'q R_p R^q Қ8柘HJ) >Ş16Q!ry/f1HH"oL0Ay) (܂/&Z1!t5ax}{XӇc߿D*& z#h>&oO}l48_GP!{z `OI`bUg\IATcs֘G1yY#@^s>IhO8CSCݣk|VmuAX/W8鏵.[z2?)y.?7'geWv~g!ZriDkX9 I)IzΤ5ϱ3:ZP܏?PIh!>ڹi?JdkkMZ}PgϠSp$;u !⇝{Ǜdxպw}XOF[ޱN"p3+է =̅F8{ ŷozntlu1[|9nPIӖ<0z|oŒY)O 6mld-YkaeI*^W h]{ڭ~9ݏb`I/?TvYD+u[`Quf x8B`^B`)KM9;,FS1a#marI'T.!dW:ߝKLvtB5䛴f~91f2'wY1k _z2s հj;֝H%:TjXTٕmE _^E,h*5,⭮"! 
KFɢd)%ES*\tնFuYyG9 {-cVcrO Xwf-~.ah.ZjKТB VrZhfIa ˨ IEPbH0gwe1OkB`)|n8uKii @Fro:d!"xK5W ([/O]9@UfR* NQX IWJM -ߚ,9qDb1dqJ.*f.kr Q=c)+[hRF9b!r 6`,k#cjUf̔"2!.j5oW}]R!G+];tTlBptQ2zD2x&H ?<ܘ\{2&GU)JZ.,>yFn>@SƬ`kPkp=,JJG2EP^J"FpW"#1PK ΃U(c% ocv*1{BR ֭az~b}kN J%cwo7 n^ :U7Q?vwV \'c?"377r4/ x0&w#^a:ԚXݤ?bl{3/h)ЂnG%̊i7 'V_deWW l@L,ܧ;[שB@'L㳕mUDM}9#Pź 䃤cr? dBMr| "-ܥtȸDN,_L5".çK+_Hk#aܜl K -CQ;*;:Qp[GnSacrx,eiy .p]DpgMs$O4Ƥ壊+/ㅔZZ!W$]Fȇrw"z? 7 ~ LI:uec߶\mVdCm N8]UA)kx WLp@%j=clt|UkQ%9KW;_8?V[į8qx6`/w\E~'Ѽ_2DUfu`ū5?ٱ%WR j-#QS11ePŗ-۬ T8] +ʘ ;!VFa!ccB)Tbt )! 5>Ɉ9UF_G7S^’$eJe##2{đ'EL +;=,["fZÉ 1!Ce#Źc;$d;"#H=EoLND&Qth ٸB[KAs!uI29JaQ𼤂H*<LKLD.  {B1uZ09%AwXhc9L*U{xa%5C${X:d5x9*F,(cL(9Yl Ԉˇ.G!8/"㙔T^s"`%;$/AC 1A/$ rNEBpm _uQ[!娡@DKJm(],E%30Irg]PIh2/j]yЫQe#i bOSec-.z4II^q3"%d'5a R7,^X))XvW=3BԩA_&ۤjʞzR2_hRiX!V]+Whrs4N`8\x Ͳ~6܆ۘpsn l4- a0o,xp!T<@+ jP ,s) 8(NFb?|>,jl6]QA@$YX<&lXrS hi0كoNe<i]Zm8>PJ ҽE8T#Eޝsqw6Vw窋LJO%B/ݵZ% 1SN8(nS/ʩtɋ,wE;Ѳs\v6PedJE/aA-Jx!N9`R2h&ejo BAAV yrCA@)y@E Ւ%dn2V@ s)$ ADLͰLy+N;0$݆m}8QoIP l/%AKozYbɅl) IdE!N SJv+%HD8Q,سզނ1\en4ɓeYLɳ?c^)/KM@X#JP" tH k0vPJ>y)^Ir`U Pa並h~xB{*,%\EBVD z^`Ф‚3em MF0>jA5T76Ou*Ji!l"ڳ^m֊2ML r@EN1FST)gq,rtH0fMMpxއU2&%nn]QN8K&S&./k\.!F^).8֬ A3Ye)5͕䴉Dir!P;M޺Ny,Uǁ-a#F1ǗBm8X%iaK*"eDgN !eV_Z#!$)Q1ǔ{e"^aM711֥N c/3T4n#*ՈNĭAK Q$DŽ3Fz;DP2D5Tr* _vFZ3H`sC4dW~,ǻ/<`EpI,:a$?IZ NMa )bݫ16W r e-;Dy%$^^(u!QeqDDH{y1-{9 \>R׮/ELD9FEWݮ0~qA_\*m5VLU]W_Z*]z;-`6ϥJaJX,KD Ҕ3XM'Y`I/o4{,&u]΢iIH(=Aak'wP>;lb9mCsiO=62Vyo|8rq$'>Gx%՝kg Ö|FrV}pGD!2Z[0[>cfni0d=w㇏7{}?ZfL^9ZEmڜgO3ٝs<_WB;NN 4.5ޏw̸&Ī&Ppb}fkNN'*\o'%>.GDgv6H"$Ć$r 7ںH-v~IICm#R$5D(]]U]Kw-ohvWv0OQ 5vИ(sO.ehrƍ"TGNǻ"ڹ!Gċ^ν^)}, p9bqZa|mz帪29KO]%JO`(W]2O3N A1A Z=@n"ب@Y (V i0\d1B𞪄;` 9F F=T sfG/^a@hd-\`ʀĎ U\kPOV\SXGj5)Bb}Rc9, a,x,V$oP KTO^Jb7jIg4&$ȉLR[h{* YAWʲ/ k,k*!sS޾/:2}ͬkDp^чd_8WVj`|@1h>~tm}LՠxT bNb U]LG~2˟_w&5(2- Ar9mUisЋF=ӂrH`W?>#7-m&kt"Ri:fw-*;v k {}No V̈́xUiJo9Sa: hV6Ve|1n{Q p+2^ǂG,uP\!lMGJrt4CYjÄŏ i c ұWF?LEmqak -|yfA86z)2q/D]DΉ"cl3M1WRj'O %m޸t! 
ؑWͪ4nbpDҔޏ֙4u&42dLKO\j-O,ܕvkCFllB#% K4-Z 7Pq>biJ\aM]T5oJ16#v*-]lT/JԐTʦU;WqZb%8edqJ#*ȧ8zI‡ /KW  נFCYe4(G@:"ZqF!DeDSKch3N2N~~]UPK> ,^̲,^̲l}Bi "6 YUbO)FR## RhǹP0TR.ʸd]GVj)|oIݞ\x8Ò4e0>)&ku+Ϯ35,$R{*)+PRUQ 'Sr .LgH-bzq/bzq/ߋ:KTa5c0w:AT:G`HZ4 Akʸ#x'º@t]fc.0ר^ngUj2!ܚUJDQyÎvF<4^3]Zk?~4-*أ\!d=ػk@%5BZx2nײH f?*BaʕxqKciAٺ 21i+W]`&}[]ZAmGk$Q;!kmAQf%PخN7 4N9ZPƷ59em&mX;=,kK}#Om7 ` Tnq|]7JKEf<9:nWKF!]Հ.Rvk5@軿_ҚO&ۮu9/#gyo/~<돻/G)q4>ސ99͜#"/O.қctX2LeFN|/^lzƼK3ҹ*'a9J CtcC<9% j&/K(WV+װX`E:ӶK~L/A%Cot0}K,K3Xt^n􃴻z>N.W&~S~xKᄑ oo?LGnÇo15Q S&!N B՟%sZݯ.|~jn>H>? h^.GNoྸR N>^eYHl0xH )^<͓hzhM`/@Ϳy(Tħ &~B5@ ouFȈ9g2:9y֚^!Se*}DqZ3MkjW\[x$߶A6#LZ)CMb4eɥV0>)o8`G08a(]d!5wm=n4b%Ib;?nmˢ$RLw"[#UŪ/gO{,rOiFW+j:_yW D6e %6 A1%q9Ua5-0[S.#X[MBͽ75aIJ͠j4$T܌B`jPJ&0jA^?>,F2WJp{MwFzŽ[/)du%] =iB"=iB>\ ɂkN6FXkoӋh~|4PXA Wc̐ Jm 0mk8JP9E/RŸu0g_v]KKbIJKD [$zp<A/%y+Isdm;IH.FF.d{+F ap1&\ )l๷@fd/.l8U7sUnXcDo&+F$'1[& ^2OC͝;m*)>U\U/,b:Y3F~@눴 */z1nJa޺\z[,rG䱎x!P3{E7'nu[e}qrCA]D AwUӷ>ZAv9,"yvB0|u^Äy'M/uh"*CEh^BQ)"m{SgK;&ry?3E6㇭?٭ވ[I>Wtl.f bڮK4 9we:oq% uY/cݣ" Non(Ej\\| +1jKXturba r\1tDlhܔȺ$Ib*0/E/":^`0[xIL2cьp31Hul$,ju,먑lP}7pԔ"yu|rOoB6UAWU!8 ` @f @ iq8 BQ(4Wv\#Gz_wǪ*oDw#Vz1ž0y2o2e\z0䈍}Vhڲ^)'ycVOa1$}10Cs_Ma9u}:{H,ݳOΓdfDwʱ5=N Nv='^>&+1Zc#Ԟkw토_G)gQ@%8ءgR 'ϞMtW;\ te{ٞRxqsm?˻93W}[u;ٛM/g2 kchmAz! <2I4(f.<*~Fv,p>9#0NB#-QJ(VYSA\oehL w+O#X Hj0/KESZtl/vRwl-5i0MIH)-ul/!W[$l-0i 7 %_J~6*/*4*$P?a5dJ[Jh?zH'x,uooܥ\v[ԋw7f[io/{mc-kl(!xpJvua{H1I)+$ TKDѽ=>t"w%?QJ+jeu"*ZIOël:HGJ:9-09a] +}}|՞ 9rU|r}aH7VJkڱ2%k˳9b%Krn̆%c5d 3Hv /Kvȴ/CRLEgHE`o C4. Zqt3p@][91L~8YxY\t??7S3:EJr@FrO)~$,⢍/W0Vmp.u$~G8;΍{=k$@)7v̪GP[EjLu>з]t{Q\طnJA&9ż{``¥ {vabp1nt|j;ghOx;:(s{ FYk˺9Z˂ᔲNg^QLj2 ZK*mwh/%sr V9*5e{-U܌uG`K1c'I 5PdrC(o3lkK!{z+܅[AT$A` pNNxw Gųfae+~[,9粏oЧN @RZRڛ3$݊t{Yv!N*E&%ywgk|7syqFEN@J9;,Z@lvN$kl{r[(@4+}{0\ )2Sd%9̰af1C2^) y Z(>{<(υ d(yŎ2ʨqB@%2x#A=!/9HOl# o*4 >}ID %) a gqS,,ͅ1shYTrnـiG 2+q1/@ ;? 
pU~4#'%!AtXDΠOY,Li)|kM\?|wqHI[%v{$$ &oh~k~r❌jxb9“^φxts { j"H Hݺq]_,% 8XʑQ2Ҕ M'G@$jo<)Ops$U0 ($&x9$4ֱ!09*tbIRބ!O)/8~%1("#ޠEXZ2TD *+f&ȔP1I8: C$dP]cҏ410 (LnDFMVxGS Rw8ZCC^W #ݡqA%}y *Rǐ8iap{b H=2i9Ḥ[*E!/8HQ킔!WJ9!Iogz^q2(EX'<إn+G9^{۬'PSXUK)$#V{簄+>eg/<MI㐟DW !X,?%! ÈmGJ / xxLHRa@9-j8@dՓ.n#vwXF?ۉ'&GNym@mX+eTZύBp|wK1|>h3d_u=a&8܁rٰЈr(ۙ1'+ ǥo0Y_[n H9SL=?H3E"H)Y|N2%PrJp[$$ dRš0imaJ-F;b`-+t|L)JuħO?g^_FgŞKev%0*I8G`q4;j BSHL A0jQ6$D+~v=@LP twqU00kC/&z:VLcvFO+3v&JS%ܐɠt(@pd ]$xd~ZpW̫BBgnnf|zvlzqgWno3u:zjuզZߧ!\bonIAF)@dxXd"o?8G:غ_p{oֆWŭ/aюRAJ`͑JB" !/~ Xg)i+uc0sѶӄ(,TJLz3ք XzVqK/1 ,B;QWB悥 QAA,Ҥx{MRj]1MAsIU6h\P$VL(N;0BwMyn\t'vʔRIWj+6M8G**[p`HPe 59PjjW,J+ہ&n)%PSp3[{\ cU/0BQ㰖LʵtoT(YdtK 8)LS=q1m aY4ȧvc+aZmyq,W>a_3 ɚSB+J#L /DjxȐKt*ʕGvSt??IJ(y$ՙtq;q8@[ݩ3t['NOphJ?_^||l_| ,?r|ߌ8]E*:NWu n>eo:ic,9x|1 LUPWCιS,Fxǻw>hQ,>F7vu"Q?m2 LbOj_fVv?x"5+\eÞ^^Wz3JS>6`$ZZQ+-"C@b>D9Cz7ӍN."G͖ fDgڒ6^mZIxa+< P&pҘ%s*G(ܓ hCuoz!.ìJ0p$z*,  8A`(;+ yGҲ"=l˜rK+ƸG)bFB(NCL Bb_EDTˍBLCp<!y0b+|U! iXpl+0kǨF4* F)b5ҤV3EaL+l $`g 1;jkw=hģ7Wj6ͼiJ%FhQJo9`O!QbI}˅/Cn `QF7F#0_D4, a$PQXM[)dP4gR @n3lJR69?/Om—r2ɫn*. sɂGʐIYޭ0.7i.I5'IhH."8Ӓ;&7Ũ@ )e() ܘQŹE[T:SJhhb'̿Z^zwX BxzU-&LsagsQ Dx;k11=PsX& ~Hy^3i7,a  \M+ھjPȣvJJNyt\Lݗ?N$|!Dy\R?|w O h**Ο}$)JPx0%r>~IR<=8]7Ps6pQ1@x77_fշ绥?wk^,{QNdlMB{`EV;B]c; ;yYv➅IO > ;,Ls¤'*+.4"M|,dH ̣mNDо/WG.d<'F!hDV"уf2QD^Hz澙#pT1! 
&r1&uj+h9`BNr5Q*CrjyL:NG$_ i<\Iߤ)!NK5 .޳0iIPQʆیdx=zpf6q Xqhw+уkf2Q١~r:Y<*FvBl4"0ky[?=9_+$ڥe6}#6kΠgaz#.ΧVzKzKhГ 1P3ҽB ILlKhМiHtLQ %ً({[V+ެx1񪊈>ǀ#hA, >bncr&ktRǧw~%9=f2apYxߺVZ@ևʞ]>3V#>J۶b_o'm-rDс9j{{KT={v'Ǹԉ.ދ XB=)RWp& zrdQLh)U5yաwlc0ԫɞ](RJqSIǨxv~Ӏٳ˙e sq-(ŷDy[_>HgjHYU;t8{v'8җ+ B_@5}aX ˶\O992['02:xqW"EdeWmeQa-9Ze=+Ϟ wP$娔k҆\)VJq:istpx7 OyG Ṑ:~NB{l"Њ wa͇I$εN_D/hpI(}qlJ4 GBShN4"0FxiF}+[?m&]aL0Y`R3 Mx7'i9&ə9 X`FU^V6Q(gjK1(md"t W`I 's͸1ɂxVyjya5KB*fOo8;l)&҆ZД y#<–X1p .(rӸK9zzu==1{=%ku3_4 66㜑 `=Y 7B.w\/?~m(ר Y&,OD2>gu^O")TtȃN WG}[8Ls (,8dK5YŜz(Gi"4j -UC7KsieL:DzL ס\»_~b&}8dYg-U|#23I@)C^Y./7L5vfla^REZp&oGo6IS7dnUgUKv#ڀ#]I;^3\?4']VA9ֶni;Cf!BM ( syV$sL+aX\&lqcÿcc\*7뙼#hd\Ԕtwucf:psٿ[K(+߯'^{|0ˋ3QZ'2y'c_uJU¬Dfwb10u4V61n,:TXTZEi̫N4dQ'.Y;DS]-wK׳۶Y8w) =k( pkH\{t 5eWR!dHSi`QoݜE"w-i* ,$j]7FuB լ+az .-rm1'.+`R8]5!jr;Z嬁cO [ApzjaN˫rs;z%b9-[,0.82\E2;nt'] oǡ6Ɍ-]TMGSooo%|x<[jCOO5ztg-oR!X!⭂|ƓwQތnm<5~ۋŧD  :%g+.|ŧYC4^͢ Kc]4ҨC))A7uϧk<+1G_O 9n^];ݢ*D\|>>:%X)Tha*H=ml]3paq@S=:IBEㄖCsxx񠋢뵠:-q@}g˩.nw^}χ.cЭzuj~?ی zW@qwc!G:vEǼ|E,CDFl \x*;ʰjTe۶"k 7('nfݕߎ9St0\'iݽs$9܃|83^r^ pCM;}v/yf%ֽdMeyͨa4=y+mHˬ1SRG1 3/hKFԶERTJ(Uwn&+㶾\ͪ Ap= +>91n~-tp} !pvO:u`7]6\ g pJfh7p0c\-K ̃?nR-ב 6pWnpF(ˍ'8_b"΅p^` ê J!$U.Hd'/3Rr@4'ŷ -|> X"Ze> ]hGxgw7VSR6X2ZOt aye 3}$tEJ|Ҷ"Yk/W:<')Ԍq$bBpO *4чNJuCėĕ]8屵p>? $V4qTLWFx!fݥt|Dp*#ksR):Z^UqBU\6ȌjٻS-Y̼'R+DL=EOkC6%d %4UҮ:pYsg + MϮX ޙh{,fQcpy3eifgyf(s 'Y`JD+h)U_m³kmn Z4c\mLOHA|LE)h8PUV-ĉGGpL\V}g|kq]FN6HO3,haׅSH  s<2_K|jX#%uYxw"4 5guvq~GlH}z}~wp{mIaQ#pq'er9.p֋Zo٣ 0S۝Wa׷ ?;PÜ. 
`B -֑F"WX<GL"yk :lQ4tIMȄe+&;Ģn^'%.c?W<KOILiH#Z  xoz06Ȣ$vTJj_n.$[o*1E- au5ueQfؖ)jUCݜ3L{SPBs%S4sn_bR]R6ǯY"}%HS)EEԱ<^}icRi"WDΔ p_5"Y=e}%{G1^80h!{6Jhܛ;q;-n`_|Wp а>C=t* z=D*~Nl"@}eE?h@;0Q:eɩ֙906,XJch' %ba}S'+ T0ha% [dTp35\:,y^E6tONJC#bnˮ΃˱^ѳBj޷[rbS#bF9Ot*/ s;u2͹L~ORz=5zW1Hl.bۂT*);E~´uk,QN*JF5`G _ mBJ(4Sek}dUf7{mQey'@Z;sp HUkӹe0I/HC/XaeED S46QV, RXщͭFK) 9r wa.Pޣ ;igCfq_WBχAVnVK&{'lvwQ=WC#.PkZow] ;L9pJڄ` l8\$BSM[k B\ΰu9gxDM|޷DW^T8yKM]%FB 1\%n+dӑt `?L6ySP?[1G rx8Exx>}v%ETqs= =[iqE5ZD~΀̦3L@H,mPOZVWF!LfF|:pBJhYpy $*$r4'( s򷓟~ 07X~\Oe&݇aUB9a #S*n-+ 4[Ʌ2*LC)-hwm\ы). hԅUX!h^Dn8vr˘)5u.RtX9AFā^~cp+A;,`r}de"qi,xZ) pw 1fj/SAmA޿ȃ>wm&"5Bf|&`/3s_~@pVNZ}:gat",Y'yٿr^o9oY>㛂a6,JFLHx GLЖ{#s#,PԞD d}~ca}Ǔ{\}OG 1[f`QQ_f9GPxv9֓.TJsc9yY,gv^-gV(Jj@Q(1ɘ )\t&GۉZGwWoJ}u.܂@0O4قJuydI1Qx:e@7XuO xz?>}LԞP~* y3*0[Z11ƈ>5 2@ pRϾ#jc☌Oe?wN!(zt)z$"/P(9*`+DBFBҵMㄭ6"tTy\JӸRO 3\SJ#!ZMH.DXdDX"mT@b-A(Dى/!:_7Ob- 8<HXtM{ĥװPkƣEQm+vh[Ը~TLTmDl>K`x3F:υ :R)5u" J!\<]g Q  yb[)Xs2|C)K6t. \E__Sfpfqsw{9Tˁ#?sz-<O؞->+<ɢ:[ν&J5<6qd"=gEB4Gp[ryIC9ӱj@vcZ/~wՒzddoȤ 2: ߎ>年}=N5Wv5Jk}o.R$ݟ;9 8">5gkdbuYW|KIlު /|IY _k6}2pdD>-,-=Xp3">va}~?{WܶEF^;}ь?n&'t<f%)'N&Z+)ZjM1EA9ŽE*}T32d|; "p5_DYIO׭-Nnxښm$nx^"ݽF"RFf[ʇ5F "&B`z~z>Ђ3=9 AAL-Mh` s=[9 ) l?ߌ6$ejr[L) d7jzO?AD2K}q 7 MjJ{}54<~wi䕙֎A?l`lOus7m3kM$mH6XY oaHhjtzl?a&ұgG0Z o~b8_˿)mO(hՊ1s2SDQrdAlv86R-¡gIJۥM,o^^nXw;׺Kم܂{hLH@(S \yC ,=Z/QVZ+fZiHn!(D͂w&AR\K\~- $eH8N aǫ0 N@k}4NK-?T )G4zV;oݪ}8QσF*kV.C~Y­^}6ZV~ Ǖ- Uk [@;Mekņ+ oܕVJcfhʾ\R-%wG msh) fb=pQU.6 Ns d6"N$![I0qm @C`.l ۳-8DLyjƥe(xܱM8Dl2G'&;ɐ;\{!%rVf;(i*4s} ȟ;S2CjzuR;xj0š8cR pKK8+Z9E]2<`q\^u H a(G i9Z c<E[.YCJx8֖c?1O\ 0˅!:ׂ廬Mif֘_uӂF? kSaNYюN;s:|Yt-t+/Zat(A'ԗD6v޸ Z,xIRbg;{$8~ y/j! 
J(ZcH*PB;Ǫ&OְRbc,E GC!Z!Gz 3۳GFԥ@";R`sBd =`NĻ~+.n"h;TIE=>R)-BNfPC`1tc'sTY=bq G &KhY>P*p5H ؝F ZhA&S0{| yĺS@҂@R̅P|䒳|/@ČwrBAk߫,4H/Mnf UQt֌B\##gWs]OƳˬQK5ǟ5O8?N v4MzI?cʬco~vQdGNFz kYKC`k\LUa|t9VIqw^S':v<ZntkڝONǾK'O jDX|sޣ͎ ݾ{{~sv TvnD^9I1W׿ 揻y݉R%GWo߃ J] |wF|SDų_uk;fޓNe >~_Z'F\:OY7jO?n&#v}vi^~ 5Cof>e23BD݉qE?Ja4Mxt4wV~C미˗ަw[B>=鸸 *qOO.Sy#;uB=M:)-}g\o*k>>,pXҏ2|?7ݱ0u}r;٧w*MbZg{4|g3o؛~'po_ZT=H$iDfԍoF??};xrkz}Q& uW^U|kkDEYN} Ƴ N9i55O{);c >4SCsT yCV휩N<"3TmY00ǎ@?U˼˽O4<1#Sx_޾|Aר2Uc).f.6޴ dwuch=u6W[G (=,'%B{jD "5}khau4|m_Gya7Gf,Y,wۋX N 8X;4b >kqJfQPeTAN,ջȭrK!"ɥOE^ߝ_Vk~Ko j6g]jz\p/Op ('3v@o} trqN Gݧh8od_&ϛ_]5֞;=5}>)4}ͭ#ԉP ,F=XHXM2t^}ϭN9qKX7\r&䚈,8O521]=T`WT=]?+ kE_k^"m9(VFUn O,mKFZ1 wTT nj{ V"~0k;FyF>GuBNa}#kEϖ({(A!=8#*zNb) 56uG)քg.!Jsl7g.厵pIp>*VѭКnqk ޱ#@s g);$]IQ$**7#l8Dqw?ozi9jܤ>&e4.ak֌ gA=KYT1yq1<^rRQx<H/}B\!=)H] I`H: xBbcOI uGmzě !.yQH ~G5z*n;&MmCmbN}&]E9aAHqA S;V߈p 1Zu4Mw8I{ o[so 7ds!QM:O۸;ǮyW=VPE"DV`l5Vp(ؒJ]/ֺ3P躅ڍZꎞ4uK֧qAspCEnm(ݜ͝!r_7N֌M)U$ׅ!@W7oUҰ8}'R!r뙻"FHwAIg*/g0\&GOyV0%sXVzjCh٥E}n}ѱ@@|WHE< GP+`);ăť- m}H?{ƍar-Y$ ".mn-dl6OWڌ0#dxwӽ$Y2\ !wޭ)NFi+7UYS|+u"}/Egs8>~&nnRv/Qbu5qJJݯ0Wӡ_cm %1]>E{=MyO՝|otܗJ;}K 3RO 5Px N RI/66'lM.Sh!&x7+F3Tm{7PyQ8PJvx;p .Ɵo6CTDH],Lr5X VZN (61O;Xb_D9b.;a h%xT!0n")i<6b%)a%3:pRi(,$[+(ekИ"$g+HKk\P^HCR@4Mz>G鑌vwdm< ]}?u6-J5ysuH]YfְPg,|(żz8?ad3?P2dT qZr_F۾\UŊ75Iw C4!tq?Z3QfgrY)_Ÿ' w#W,\bUvVM^ lf6e}?=x Fۄ%H775`2R$!;Oi> ^/H W5Aێ^25t Lq3m@ټ},u:~Fu/Va=R5ܴZ6_Kܶ&21Yk*uŋ(|*0 Xe:AM`X3 P'dĴs 'U$Df:J6hyDU RĉrV E䈟Fh@6n T CLZJ{`#87Z8/\PZC$7EG4:RJQ衇Ol!T7p1qu(sN{,Q I##! |况a5E@'pB!-c"6ԁ2<69 4 y+Dr[ MRs GqLYI4a.APKxm4 ʏv,Z^ %')N4h12a:):YAZ6WC+k[iۚ{)Eh;3phINw뀎G+()PmÕt~osu_b;y>]K-.[ P|o~<$Ђkj57 җ8чLapS.-^K ~\7&R .bh5π?oH}xƶTbMio~ f17]㡒UދÆɥcUޚ/~!:*ΜބT-qH^/@uA6DrZHM=K]k w 35FaLY?ɧ}5nJ-7S9T)8nσ5 ?D;ij=h&q'GD^/o7Q5ۻw6;P`񴿍¼অDфk{t[U ^nuj&)^;e0RYȊCb>ᚱ|$Dn Q*{Qh13(ʙ}rXEkk45 vFPDzcf~UG%m /SPJ p ,6uٟ!dPz5^֪%]O\#.'{/R~rd3)AR=cC$> EjjI;H ] f8<3$D"JRBc{wCs![p"ON)ORoi,j IUTpw:H';%ύ5F䛲[{Q#G lBBq5qN#AGz_w5K~Z_8:/\ `ß*3˄seՑL&vJZz ȡ3L@B+JRh$MRPCgkoU"Ay. 7U7{Oǩ++4@Pq* ꇭӭt~t ;c. 
5!1v_L-GD&%,(hm8L3GJP@QuF$k!kކ$+lz;<|#ܹ>j`4>-)}zon??oW^Z٢mhomt[Ku8gw"YUﭻ9j^:0/i>瓼0 ;{7o5SKmJ}0Z77\>ٝp{%f a+aրۑ!lj#5 >8 F2es,#,gb$&O lkOkR#ϛޝAJ AB ۅGaFVP)͍b GJ$W"G΀<(y34ĶB@A$Vی̢CﶞGhB63#3; ()Mu\Tiv0P|NQ_XR@@L"oAޡm)3Q *wɁ+ kpmAИ"$]JoeMD+р&1JHBx4|!פb/Pq#Vѹ]>>b}ǚAU,I|Lbl =tS?k3uȳTWmyA0y?j0^Ѥ3j5(P>rC{ݴcE^<Jta<#G{a:<>>}AD.(ԅ!h6K>l&z^qZӄ8=?W#tOCxnDP+N*Q޸AiqtFS?;l2IK6[T:].j즪2tvI v9 Z #2Sgk0BjSrd.ܸ]ҹ2R;{FYm$c̪酾bty?W/K6q~wb~u@Fz:=xZ= bΤD_GOc+V0F({<11HG$ k$h; T#{XF d[3N.Z_7}hl9Nq܃M u5h6foŦR!P{v(WTv%%U0rkVrnG]ͨS#/qB,$V H'3x`o=5c Kf [·Q !"aaek: _R<˵XقSg Q(HźĂG'jjnhDMɪ g;Q #X#3d=eY՟,v.Y%J?a > {s$j KH^%Tϟ'5{,֣Lj3c b)ӿl샥l;o_&׳lq8D>BɣO^>%O8\gv2ZvvKTԺ&"r.ơ0c ȢB?q^WI~>%*[c=&{-sp \APH˥cH_ɋq6?9XH4݉yN[ח^|iE@ )k+uTlocJ+|=9UmR-[򪲴z܊0fB$t"dF5Z9x;N( sclKe [Q.~rO5Zocqtd}>;?Od \rd&2,+JcX*j X x)k?28_Tވ~Ȃy; 9#t lY?8"-'=Xml/&rQHv$hg2#]ٲ7o.]Y[Ɲś#1 usQ+0uH<y=sİ?jQs}zT'۟_}>:ҢRd|xj^d/z9PQ[e߄6ew?_ݻZߌHCeJ"r F2o!I`[zX˒xKٝݹ9M14ist"hR~Fd>21n9ĨIHyp^X8T4lc7ǘIQ'ID*v qRIlgH8vCx9/lY /.bi}4o#I8 +Cb'>s"F$[tax=t#X;y{5{߅ڨCP?]N}W:3׮ NW &>MHA>'"om+INg;$o渒pEJ<#(H@ᩍ}DYk˜f)IFCS U`AYo-ZB$7@c[7,Xx\уl*"s@aU6_n΢ 3Ϧb'<9w<, oo^%?n߻FqՇ 8(mL,f:puҠB4R6 @ytw*ͰSVZUch-@-xlqJ`02d)QVVD[Sfc;9̠=pI ir& CofeydGZ|V pV_PFT`GI=ʽeVhVzg{ cK) Q@TYb5AH7Lεp{ q2 +c' mc1~cGd&F{Yhg'! ̘8! ,eC@18G F8+:YZd@%bcԓa[굉6X<Y$=\IZ`E;۲2A}ؽ8@gu[Zk'h8ju[vorĵP.$8FiZXr!}0*:{ƚ!D!)ʝ#L{H-ie3㛪m'H*Cݖ?RNv\erm캪-5ZWpU[Z[֪{fRfA}ad>@|\"*1}sސmkF,qhC_Io6KN Ya>y gҡ}ɋQq5T2hN\>)K۳UHj1,il1,~]` p?<̩S< }N _oҏ8FrH{i̇(o64EqzZ8]-LEǣ9"u~v5ύ|6/1+9}򫼰gw\Z5) =-:@ƿlID#YW/;z uvDf  /mj_p>9w0d xgWh-5U ;%KFdAJ;˜7 r︱6WAjWl!R N)AiG4[e% uœ  Vsq_{דj=ufq(ѫ%88HB Gz1:5;7}#+*$GwIEis"$;7^JD喤Mc5 ʡz̹rw7&bDigd2Hl7^,O˱e'm?S+_gi%7$0I>-$?'d3МgT@.or!ԙWťٚYޅaA,d/ Oy:p%g;[Uugh9?N_d*7 n;vqza@ȫ@9Yc.Ɨ˱% ɖw cN181tMN11&݆w{-hD]@nBvkyhྍApN{Oh+^;,.!5Q (CQݏʫyyϋS٬fPC P +vv:ck_}L:ڪcJ%#$-gp5M~_>z +⃈*q0y hCL81.1V8 JF '1wn-I СEh?eFۼrY6z{U&eC>fhP9B9lhYTmW¡%;u7M `1g.;u [Ow!B9bGgLm*X5ܰEj]2)l}~߻}̎Xƛmmd'.Nံ; !a -.7-OY[&08Z!}\!x)%4nSati8Vbnܻ&&g#FqT+NL qO1q(4Xj];"!p^!q#.! 
0YHmA%^ZkO00Pe4JKS]'=xk8$Վ0ZpL9.H\2"+AJ&`[$0G6'i{!aL*5Hc;$!tsZ*\.P;cL s2{zeܘ5c zr04Rɍd3.7pl^NMGSFag)8uJw%S"eHRPh:`ϳ:FYL5jUj#<'~ 9%j΅s̅q ?c,S˳tVӓOי_<)!"%}7ӛD'N)e\L*/ /]͍)~7崽Y2jOJ:QPH%:#9@@@o~:.]i^j`2kV 53P`|#RUJՐ4TwR۟ ˹7y 7@G^'vN"y)LIsiC.&Ї4 N[edLC /oBXF_rti[ȭ;I@ *=7ARyo E%d>_y @p]1td͝p^lf_QW+7sfQrQ~f#Szlf18R48LT¬J]5וje' Te|BVxQ NdcH!y4O +-RsH-RplSxo .3~o E $+\VDXr6ʇ+Vy(P+a'L+X|^l̋#;"h/O,o}>afȄ D<ĉu![97Ȣs4.WF@׀.Đ2zu֤6D YM+eCTzS9k8}GگuOA-l g-F_;%9}|]rg٠Qn>ΎX:DIҠ)e2뇞{4yϟb"]3ܛhJa~Z`نw~/hRnru%L_~+~:*C&3yxs?H/)8Nw\:}*xL Xtv q3N;^/}su$ӆċ4Ȧ?+f% K2xG_h_Ճ!/]U=.}N۰ ~NBv?jOW>\lP0|މעY/u<{A׺=YC*Nns<wn64*I-h]GXmVDruquQ$* [3O;w0e[`Kf;yv\Q7`1JcT/z;hJFUvt?tO`}uz< 6ҿi{VDݎZIPh ϖBٝUWHoo{9Pl.ئW xa1#tkp*W\XhMIh˙»{$Ytw cR0 %r2c\-hܝe)@)Gkjs7i$jtRK4;fU'(L[rZx+e9$jO*%‰N,7eU)%Ԝj/4M(B1!*W?mi" u+ؒl^ZJ?TsJ6Yxn-\SZ!OexmWw14>}{oP7WjtQα?]t2aQU`Vh^e.pa" c?;.EkV!q]pmaQylN^^{tiFjǾmkib[(}řY)?HsRCxJrnq T9zY(&XJU>tHٛ,C)a- \"@n?w<%^3POF]q)kTk^K!w(1C^OJtv7zWO55`hO8Z` оtDG;#~:XGIѷ}y:iXGbcZB[𭩯t` zӁ*2&c8ޜdpTk"ϩ}!9h RW"Dudkc* P:x-e!ޒUr<znۋU,KV~[:s sx$vNoG|0|Xϛ?9Kʴ"Vxb9$a)D}RHOޖtd`W:@toʢ4R`( V vܵV\jʜ6ψxp?ߔ'\  swaǼZi{Zpɻ^DOg֭ R+i4{D1^S^> !QLԔM sAWvl(;q~\y@ZJh˝2qSPYHs 7*/@tCPU25øagkg ]F,.ir)Oރ0NG Ih%њUAhnP߲kR_H- EH3YJVJa:MUMJJԳR0CDE'k,vzyy3߁k%urR/N&ba{MUc\ؚXfjL"*!VV`-HrtMRqgDh35 Bsq o,GçN/ TJ<90׳ sj}p}Hу0?mQ*_?<~T)DWg"Fwyߍ/a _xh2- 廇[\p%߀_3v:bD?|㳀d+' wʈAggC>@)bVlF<P0v-Q>V!kJqQ;.e밨y'Đ(+K61,%WKF8LiÔsZ㎧IBGӖX7_\ac|[ÇGts͉DX8S`7#I/Ylq3a`XޙPo7`}w)lѭ$$Uݮi=:yg񾹞-&}v=G&%sख़Ī?ZXj4D(4|nhD1:pF#V猎!-.M|"`Q2I5Z@6O?f}ď̃YN=1Q\W嶬qR*1{OIG5lU:[ċ8 P :`.f!Zբ ](i0"}y2{FLK(0>#:XX sb)˒sVCVkMNGu4$'%E )y2EbohT7 {ٲ) ;z8 MO&/51X}d+a*d\QuiVsho|5*p&lz GIBM3PYXMIImAhYw_WhEۤ1jRə2JQ5,6@vɣPk>zW0grLDT{hjڠ'9!J%WLe:jU@ªKQrE&.kku.L.GiѶ)V7PLv9pdpmR{OT1s0_?˶_9=@EWtƄzz/rbj sj Lb^L$<{߫tVJ1UĴ7vbA f ]]͎=Rʻ&@Xow卻1!+XUWY86U-؞&0$Ha( BD{ cXZjaBf0`|i꧉ģK"lY*Pt4j@ry1+,A0OX9"[]/? Oγۇ<Ks,xv"ryxb qMHUGqI.fOijSCTk&FB͖ rXaIMRbRm!-1U翸",ҹeǼ}L]?½٭ ̥nq[_xe _:;@0tv{4bw/+x_w;unx/]O3sxUtүH"o5# 7Glݺ;]^[OCU-y}zсaDw/N%MD tÒ? ĮQN'3ML?o;z! 
)=vSx]_L=bJHnza^/v 6A~}Y=K᪸xq:Mzu澙3wc}pp<$05rtv:O'Ézsܮ/Wr74S pG=I+>H՜<` 0Ӱ3~+LU`y~dOQ2UStg&H K˗H>4JjjG_k~W~݁8meu"MG~dV ?$$Ny6TOȁ_r9/˗ct{]n@VLVZS!.RTMnCH\YLޤ1-d/+}*ZɞzCR%5ݎsRIaolW4:1k"&烱di I>yE{h^QE";#|'U 2zyAEv Ɯx7Nk.gO`1賾lآbfN wRo]s]޵s}H?^~}Qm_9ZY޺3v]C鵆H89ћ}6OϮ>|Pc #aS9= __^>{@U,lmƞ-qqߤц]F ݯ X .oH5~?f6htUz9),lHWGmE514HRQ h4b=X"[&/Ca^42 \l@ 1+,A<śo> Tn!8C55t Ғw!ToV@5 Ujx ј Yzc}QsKCmi`S @1+,A<g9JP#@֩6>#z[ǬI|:L*UnARջƬҏ-%>c> *m8-3O?6dn90R ӐKVG5 }Zj8s`ØO#[D'MXphj4yscu>pdm< Vq^l ut;zqnL1+QYa9̵FwlgeW6{<1 O*jMen+QpAU1hFq`c9A&|s\m!4 *Lym&hմXIU GMg|*JZq{ɥȎ{٠Q`m5LGDh FJ)x a 1BCcʽۤ~ɶvP݉5*ԩkGEUT3*)j}-s5$9+ɧfIfF'jHmc*[ZJk0>*ý+\y@2uo'6qގC (SSMMc6k`P+HHJv0P\.݆H zcȶ qR Vn0(bpMWНW4h 벫~ jSwͫ\P%8 5Ή'(#5jWFYVcZm& 'M4 C!PgeTP|`rJAFCY0`EP&[JI3vcinU9tFb)!fRBj]CnP9vLލU’JG0um-Q@MLBTL9SԟGE݌$'hίJPZcoCO]֜446o΃KmE9LTVca. JlVQ SOTDPl2lpHч类f5\XLp ^D-Ȕ%&@D3hzpgwm_AK*wYh^=U*K.VY|kvN @r,@b9 rIl?fzg#ʓ2?[FrNύ! b|w;bPRj#70dOaDCغs G9AO]D.9c< JPvlkT<`ʐp.Ί^viTJJ͑4|0㩭O0fzRJT%)>\ijиymb_]ePNB߭?kOfRE؏opsc 斖48bBzA:4$3!s $S;"Sb0Я+SS ^bUPTR2•sCBt>T6fb갊+ :զU@1¥!fz,5NɉCթ326d RV2DK4C=)VR:AȀ[̜r ;&~E]&aڽ&e@ÿL?.b5 oƺO8hֺ𦚈&qYl=npG :O&nqd3|ts[^^gX+a} 6&}ѩGtoGݎ?QЯ{ NttH@Ղ+nbQ/]$O뻊WUuŽ)=Zp ϕ/D DBT\:µݹ;؟7ˋf.Ǻ*OBB䆐.k=?euM:M_Ա(.Y#b]27`=Jrzq"sx7)UZY,C-'7Uo*=/0pE;!Kd)vFDDW/sް_cJ]v],ExKQSW2$,z^Kq_Zu~/^Z}Za_MXcuo@YL1Lh~Az>tp+zh@s#(^*K Kˌ13Qe\Wj A*bjFA!ƪ϶Sz/KIJ D0(h@ȫzG&Sf =vi6%Ȯ`f"\[=Fldl(_jRIꃖ9FBg14%S(uBcf ㏼;3~p56^j}e@@XJyinU!(_,RTWBEpն 35ߤ&ݗސr RONYя3A\y-T+#3/Ε]hmCA 9@?uZ>tjQlJ >ܡ=@\t1# ƀT+5}TeZ) i,uMNAa/H3q݆ijҧC,AaKaNi`(3W_Dc[h\!W&jpßf>g>Ҿq7bΣQ -3e@~`p:|[ V7a#B 2 X}0/"}oFZ3z+,t${Qys[#b>fҥ4^VAſˋy}xq7[_znƃ dsql5`s;/-Ƶ #ޓeVQjL)1)'S%JӁjEj ̅%(5CgTBqA䇈ԑH@I@mLlOLp uT!J [!E( ޞZKR+MSm%7+H,Ҳ . 
gpTH5ILJȐqbi0IBZ1AKQ&xyNJ˸V qbssVٞjJ 5>zo[ߔsur5WrZn?U u:|ZN~<>WN\]B^>O&]4xԞ-Yscʅ8t] wߴr0ӎ[5>66^NZ&!K Ͷ`tO_uMOYxT}4Gvt5GGI~c|yM{*>/Yk+f z7$C:bZv^zolBD{ʃs30r4$1rI8 ybS1>ySO?T[FrXk?VO]LFMؙ.; 84 w:sFtw,AC ;Q¡¡c*zqޤ"0$MX@ ajdxv8-":OT+I;0,M t?ݸ6,нu_?r'Xt|m߄E#t˳I5t~VȘyXSOf7>E;)nܵnFsn]e3uu;e1ͺuoi!m=d1?e#S gn tw ;.pbgz B OlV8E a=+)Eϥ|(gCJf2_Zdڽ {Ac!}^kЭ*)j,1ބk;WoU YTm&k/7ahZG9NӕYd.:ST $~&Vbox(q\[|2u\rx/ ]@r]·z$ܕ|wABUIt 'Fʠ] eQh/ZOdF Uv[%+UZX*(1l1zAO,ĒF^'dRm0t#h+Mw MפFuՅP"gCoo0}w){E\w/4X¡OmpyiMK¹=YP[]Q{=D zAȏ^2WVJ$So_j 0Xd&si⋀Mt)j|VȖN![#Jqi~*~EV.˾E~5+{%X-rJ8p,1Xאe=[ "J&5jt 2*]OWBU@ 扣kncI +wÚ5a.aR89f0)I& !/7:斖48bBzA:4$3Zx%'8MId/͙b|+w]/.W!)~xxNs& ??3ߧ8q)jmX]=S@Pp~i%yɣkZSuq+uћ:\o2qV~_wjv]8kFw+kzPo/ (QqGfaW%z*2J(.N,Q гQɦU'9RaԮ| hfmgR:|jsL-w~SƌC8 ns)F\ w s-UMbbś N,@n^ Q A>!Ua~w"> ;[LiBSQ)^IukC͑zB!ڻOM"ML˻{>8aZI;"|P{~3ů|\ {dj%S+cBƘCkn vߕCNuu8i,8K{IM*ߢϴ{߮ Q2{s ^-q'³}C۪6a Sh0s Ji/3^R5l~R dL'ՖB<~-ML=L'ƗOfW)jBRaI*;d: D>xc3rV}pT5HJICPMu$%/Fq2_,jzwWd@T7k-ЛT]vӺRKq"%QhǓcN*btw?F3r͞PFH 4-){{*w B['#:U8EgQ"q'C^ҜԔBfAeBhFԔfY-F`Mi0sr> 9U;Ju˙q+Pf5̃HUhҨ}cu~0&(|ꣁ 0FJ $Fq@6]ZFZKj ?{Oȍ_%hX$=E,Nܶ#ɝKVIաF-zie,:+,2 =+te *PB0.T(DTt eU;Hvtw 4CyXv:Wj"+bRRI$ԁa2jYq xBQX ZBPVu|N%Uӆ h Z|ݹGH{T W}bH^^ѤũK0:rd/[!uAXn%@c, y4i-pm WGYpU*5T zܦ}8=1q8m z 8q3Uuꃚ}2>x|c7 (AKu9",MҢ5vk^EItbJ[*W@ "0 2$تCґG_ſyߪ?h}jt43J,;Ff[1Lqֲ,9V> 4jpI:j@5œKf-ʼn luZ,QQjKe}xn/z*H3Ng +(/(Ji $2} @f{`OVXN$xǫc#M)5P쏋Tim'X%8VͬdFl6J}S"s囅=8'ItqMB)h~B՘\.y $a-Qo]Sd9h:KX2T0 \rE%iLH %Zo)aE‰ jA"Ua/<RwbU,~p?-(tex&,t8b VR"Ѳ,}0a֥_o[I9g($Z Cӟ;snB" A<'ZJҁhNWއy?]Ff'1+~>93Ū _-H7ťoʵ~G ;}% y'MRtVUPDp9gPq]et)w;ߠ҇ET@nB 8$֤ GM[CCHC1E@CBTS\)@8%/b/t& "8I;!4[]$+BMeD*]Jbb,`"oc3k9|ẀSf`5XNP#WpRhlNuϡ<hϡjP% { Jt QUpSѠr-l$ +yG}HD4H 0"a[!a4*-2Hht2`h8ęA*aKwr&CT뾬*6eqA4"@ӳgUfsFC`+`pҲ 'UTʝߚ!TԬHTѕda g04k5:iq0% &PA<>)Ҝw3>=6}y0p71@0B`,{fX\*!t?}PW0:X|Ӧ? 
1 Q|~g3&d/LUb~q;>m5k\ʻvdψY<3dy>9k,zO::]^~}Ǜ9:&Of5Kk<d9@e#ܓ-Tυ}mv*lF:ok_s>Ne^N_z q\h=\wn#RViԊB*|ԱӢ[$Heš5prSLG7={:ؘ~7xO't^C1W Ldt^L`WԼmIE97S$,xStlB] zq'aҚҞ;=ruԂ0NXic]6[$`*P~UH{6g$SNOMk$xJ8Ը^JaLRJa\ů',fpKQpNj|r.r\FW D:ub ;aS@23BUgu2)Sj=wl;>R]7#!/Ed׍F3IHAkܷvkKiv[}iYFڭDօr=ZaBZX )O80^DHN ~K\4spNT/% :X֯7GC);ҴRHJ[Erb| I>8>2R)6{RhGHZP GD4w*)ip9p+D 9ʀ+)RUXfQx,P#4tTy_qNZ'yu1z$_X,0&0!Xm;mI2Dن.ϔUTn5P JGjk< K)粢E)Kڷ$ c9[x@N{dɦ(T& ғritJ>5B eԹRS9A(l\Gul# ^(]Aԡ*Bc %Z/T:# BT؂Q0EDQtkO<5^O9ϟxC*@;SE%JHԐ:v FyLA,ui92@PYSOWbSwCċY aGf~BaGJ)8pQ aLipYk[ŬwxY:WxA9 :q(r `М!~[qDTiǝsXFK'06E8|J }0>,j!Rj&mi'̃hK swAL<尶&At!!\D[ɔ7ZvhZ'nmi":mߑNhJt8Tօrm%SnWnhv0{Gv[=tۭrݒ)(Sj$'tÚ|,5 rXNC^bkB%4 0++)RP K|`h)=CX c*eڻp1`t A,Pg2^,EM?_ڿjrcdET1)u>gI.L> z~'bqw]mJқe\OdsE n|Wxֺ {hAdJ3D]Fh_1֬gBDr C*0 lIO^_/'J4^pNNRBm[Um#RBrI ]n5_4 EJq>Av: 5ɘn#]|Z6-h) Hrɇ1Bjݨ=xLN_^%֦ڤũ K0:I(J{Q +gՏj5vc> Pha|s| "[.R4aJ%W j2f%Mdͦ/Nm2o6Q~:2Z'W7񗾙(-Jk GGOG $ H=wOuM0"bGz]o9W|YQ|ȇCs8bs;eNtX>I&7Ȗld&EvC٭f~bXlj='_뛋/JxX;ǵZT1 |\?~m)b{7ه?uZ?]}iCXXk )VqA9} !"+$'۴Hd$HzCH$R-Q^)9Vb%Ji8jOT+us?Q$Fu3X8 3F?GJZ7{W6\*<Z; H{+Oi ?6.Ő*j3MPUDv #ٕ^K}eoe/.1nd$w#w|.j^^nW8vh&E{`r;Z}~]Y/|vر?>z:kDɶ.},JZa>} /^8/p^s/ D 5f5hJb(,5+ ܞٹOnRw[l[%{Q6}||ϫOgB y70D_Me8Y3R2ȈS+[5%RMA#HMmUd@"YAH]Pq7ڪ=\J@QU?KR? 
4jFRݪwLd(jc@X)UKr¹]u>f1ő<PoQGȃƹUD)[)ٰK}@/m,LaրV]JuВܫ_DZxl}f]wV3_z>捘`C=$ŽF'J eOpz!˟FX@>xωhŧua}gXoަwmΠC37ddʰAT*dO+#%s?h܍AWvS֟Πw^{ђj41>U{\,_~6}$X ، ɞ;o%CPɳK\+dtEϔI Mgؿzt;sD:6Oĩ#DeL(jݎ^` TѾT""^8Rp&;eL:렕宷OD>4F5XZ(!%UKkQ33-f#ӷY9qhΜ\4hb,IfF#A/0A(E2^jU>Vz/^x{іv 3Ҩ±ʊbPE+5Jj+-[ì:8%uOm%$nh^趠D~> (##L;1;};1*q+ŨN+)VVqb4 k~ljψʸ~B"8*b&~p$ ~84S!B0Orvftɰ;+)~3  MrQ'8pQx TQɥT䯭4=R'#*ҦP9vWJWвKJ]GbY_~  Vsʈ B6,G+B6!o3L!HQ$#=%M;c7T汤M3D@ Nԛ4#1AaILכ4d'IDDp>.Jˬ'P{u"!>RlE4Lc¬p)+D*cKDJL| ]j:K}bygռu+e!T %;H.r7>7oyv;[/eoCv;A)"vfL-aח>z^mu8*e+?^$vQ0!"YLҵۭ3/",̏:iTl@uͤB1`C" +TM8QqtRPv[(L(!}Ho8_PkeeXr*Ɗ;kf!\ A b]NKULZG?>VO b&=s-A&r  Šlܞ 1,ާnҔ3‹(!yl@qՙ"j܉LC/e}C"~2w@Cp xV+!^ Ðyरtje8T̔^G%(aLURWXq =.?Rx(Ò\L(l-|yQ6Җk9ьZa7+f*ͤ44ɪzFF:*FNjc$-ZUeIEU"KK!j'ຮ!U6B duӳRKq]X }n*[z5pIX9%DX`,.{g􉨔T`gYYeڊk0YNgȗA{7F8Wx2o A\NHm%Mtcm8{q;)3I9NMe%ܔDmio||Su[MAt sOզn|e Pw bLy\vdQc"%vM܎r̎q4^/5^\P;AH dS"94b^IUP *:,9> :咰b=5S`|vb͗F3>SD*!*m0k[/Ъ22UA5T1ueڮ oƠ˩>&tO(ry oFDc+xa"+;jvʜ'/oVvƥr'EqTіLuQ2yv)Cjk6{?? }]>ƞa]#n Oz -ꎻ]΋ϕz蟛y'Iї`m9c0?jb^shP[4ҋ}+ŝ'|+{BHqSWɳh15BdKٗ|FP_|Kz+cүz*C/uvn0m#)`G;1hp hROh`@K{pr.tvɥ U+]"qxYkY<1eӉo?PsT֕9]''0O" C_}[V?2+]nԼa?2kѣ8vunxzP GG~ sjM{PP+?ۍKOicCZ;:Fq^)҄l a ?8k>ŗw&Ld |"Uuu31  }]SLx33 b *?t8pfJfw: N<`NcβwTPsk)=~Ǻ8lZFEG6`pʸ䧫䛧{dN(sn;{4B>7M}w}z~5_9.5 GWoǙ(-6oL֝&A{o27H⡢/fB[W7W #[ $(Җk9ьZ17+f*m,hDU-e&  2nE]^󪄓voŭQTsR#Ahi@UUVF Z%U/9.K剭늠JRU}%P-0y; na*&ke8T̔e+QZ2~ T(Ÿ>jTdrO 9 lݸYmZm[n>3;7kXMmkx^ٕq4BoK޼B?lT u˫]5?r[7^G ɧ7IҮoWmܝrlDj#~y$䕋hLqPIh7([*BD'u6u170ʃI#F y`ݒ'rH+pm_E/Jˍ~g?ށw\x'?|z~?xy[?L>[]:=r(jmz\M>߆p\ qC8.!78i]\TְS# +!r1n GׄP]M@8aFv)w}nko uM@!>;pJRdw$8%?}+]zq޿˯RDT%IHhKx{bĕ5P΢$JKF&5Ubܭ8,M*P(j`=Ic۶ clXR;NְCf7c{Oé?rd:)!w?'? if$cyŇm~rAN"[]YsF+(fPsCVVa+w_1P${k R 92c࠻nx/5/2/G#劾&je{BJu)5y`)80;ELg>vʞ7ɤr<,_ ƞJ$'R Juo唿sIE 9X.PQ9 )yf@VKK6vo\L?`7X,o>%`n< F"zO᱉-YqRjh }"dLsB^#O{:J?$Bfu%*S, J| 0BTp^ GΈ̗M,H&C~Ul)seL^Fk42M-+.Ph|.@Hz8_^m\7ԣ#ٽGri! 
f, %RIġHR1C0A28"Wx޸E^G[p49Qܩ ֨]m?{x'^3,Lnꈕ:N5AeX1>Z 6cؖ?yjVls0:7c8O݃Oԕ[kltT,4Ef\wIbwR'I4;K@dj*_]O(qɾ`J)TKVmYbR]fՖ&<u䱬 N%"JkMR3ř脖vj{vҳRX Nh!owX:R~&sT(q%ીqAB&0,16#(±$61{Kx rSs}" b &Cߪu0~ߎ# (bX?qv̅3OgS}jsJgiB1UO >xa|4hGb0.#`LܘǤ' J.I^9¾<(f$־}&QoH:JfK =нj7㍣|UkĜI9 b UEQ,3E&u+S'1,έt'tV)J#}VE; -.t1VjnaO? \*Tӷ9{Wnb.]/ v Z|u]ju]B^~uR:S^g{A/7JZPZ: ;mf:X=:dQ)܏RED+wpj䴾N ]¸.~i\LGp t GQ?J),ҜtiJBث:iXONȳڶ Opwt؎m۾eR:s8o!m&KEwZG1KQT]cbyu6n'Eiߓ WjE7F'}ĝ`eLpMr2 }wڶ\hӻO^zi9 ^{V3s69ڍ RGۭz5Bto0ZlҴ,oÔmDi-|yq0hsm$s`6VcTh>ޏ6'c9,:a)YsQ9T \˲9& `mfsĦg@ݴnqHrћS>ҕ Qh̾Q%eCsJ]77+ţhܕs*1mSN*΁v^m7ii_&$;=A)-JvY-Jʭ(tEccz9*2L؍9J}}g قv>cc7sSF|LAΆ0LIw^L;E6.X?jn%;dʿl|YۇS: HABFU#' 16O)媠/r"kx6MUgD oJQ{PV9rXWDV|?_=L,|ɷlVOrөkŔI±*k^\g`4aMvqVrlϤAj 4?\i`fqZ|2F vu#vj_4ۿ \]zlK [.G#/m2ҕx;mW7.GvdXm6ƊXXhWBĄb[dLIh G<8dQ,X8S &D%NI"Ŋ2c@ EJkcb&p)y 'ah(B6UId+o; /w6?UF6]fZgO{rT0|;χ= y\rm[rIM<ffҏ~>xADy;=M}%|!~9 E;ʏ\{)fH)TOY)F5WJ񴤄a\{$Ʋ.?Nׄ }O-T0BRrǶ䦅%:aG aN2,n] H aT04L J4LE$q(T @:A"D+#B q owV:sdSeNM(٬8xop:f׽$V+^kNӰfS!mvDb̂ZÄz$1 ܚ$ܛtZ%x]8t: ???>K'`,V.dWFe1BWP! #DUݧ^\5,CM/eQjV,Sj!ow uҳR&ݴIR~B-=Dՠ-=?-oӊocKi)!(JPD$Pb…Da ?Øq'T T+~Y.E 4LL%?˯zI:6fP hI8ʝ~"P IU>z^zWgl2Evֺ*_K5R۷c@%&C>(‡E<$1"A=]޲{y2I[K#rY _=l !|v_1'#I\ l5¦})l>$ZVF Ts2jS|UZلԩɩd \i^L |n1eB\\-mv#'{kgS /#W|4G?\pd~ÏO}g5j컼fgD !R)#D1SIA "$qDl1)>r! oMHx$WEo<[ e90K" O@H-&40+|G;$ZՔTUŷl\,K &p"N( H2L DT -R ExB1N)'[<_ljm¯@0շϨ{pY;\}}FsDmJ?VXd9d)i]09S9̟CЭ\ " ( NݺW(X3xuښ}ow(M}uVK!o DH+g18ݍͬ"1Z_YK~֖ku,Y}za~}0Xc^◯{+~uɨp)DVoRTeHia(Y{#V]&Qw9IBr$z1)Bnk?,%X-xs*ssd@$׬|KkҸZk|@($Y^VWLpG&8a\; ;oF&ԛ̦Rez06&^ⳝ{$ _Ą "-zݻz fWaxe;j}DշT&80qR1UN vUU-C7LPNX[d$e)%&ֽL)Yc'VT| *Bg 5)5o.ӡr] OP"1U{rԤPJKP~٩;=T~uDYJɑEY^nk,[dz,eՂ/&=f|N"%&KXnhͫ[I",KVkT<lld!Eּ.)Kk4A2wUE! ;G2ի4laV;1l wN9,G]0h3FX]Mܴ%k] e9?gn {06h>ޏOC(@ rD⹞0/ê)̾Q1FfܬNWAT0/g #=zTK`g9 F&.򦤫3ZlXd?) 
k9jQar3B?χ8= Ά0Lz(USш'Fk$.Lc$r PV_gv\&X(UH>ӂK%NPka¢˅ U)5@R!S2{ӷDMwy|.բ8vw8A4Ȅ`uke^MkmsW;2¨jOIشj jj^^|AMhšnQHÓ=wzEySmC.Im2 A~1gSԤbO)G)\l,դf`/T1G lh@H&QO92u3z8S$336){70k16 FfKo U) uP!ow;BqQKuq4vU}ep(eQra)I$ wHF!3HEae 昈 BJx0»c?`k5VXaH52FimxgЁH(%'<-U3⁐`d(5 [dS=XNl&8r֜{}]87Ǭi/W^Z҆Ra}׸`_)ӫ֏ݒ۷OM(,$#Lok={1G;76y;ZɢK)B p=k*yOY]>˚Q cBٻXrW~Yd.Kˌ f0*9c{KUꫥKW}")xw At{Kcm/ C*Uka\l^/wӠ" W,Xh3tHU>Hv8 -Eզqa礥ǧ*]rx.#BKmmmsrgs,g2gcJ(XQaЋ eAvjIuiPDT#oB18 {UC#y|> -S9i)@FHt;id1JNa22zGqnKeғQmxZc*lDftcr52rcdRf4VzC?ɂ>l 9oq;*sPcj( !N$ITWnccX;#Kx zw3oi\Go41c|76>ESoD=&d8u$MNFz 좋Hd. 6W.8^y*ʜ7Co~+ 봩7)a5m{2cumFZϮle Csisf2}EP._ MU]- F]Y+i-[\<458)+K |D>Tڑ۬l:`cdM/"pƸEDQ6f[_ձnzw*=gyi;cœR6j%<-ύ{ !t e\%-@"\I 7)6}gLQ$A!3$yQh{Z='I#F{ Sab^gH~ЭBRow6cr1)`|٥Prx 0n<T/>l7y$GH9)]n${<~>0+`L;I"Ӿ\Xq"'krJ A!*Py^HT\ τ+ʪzmdx(`vQ}B5gO}Gd"Ԑ"o+,7i,G ˘ZcѲ)--\K~;~HJD8^xO l*د,kC܂h8Timk̐qV J@8~Q",n"t6FQ a\PMQJ\Q Ďc 89(Iv#RǠ:+xA5'-=j-*NK4P]T_RM%GqTȱki\c8<~\֪bd\u (#ҚjYK%1Z Zڼw8-MVzXrNQvS}B52eOZzZ{2}1zZ׿kJ$R1JJFtk+yť7JHnҷ rJZ;'yi V ʽǥT#K986pއӧ|[ U'ۙ owUU6?5u_+śryJ*,?{?ͮgd'f4hk.xw3I9u7Yq?Y~"t7SWOPmmwoW- Wq[~77hg:{fl* ƑnfsO9B4+L=Zȁ&(tt;"_.Uke0I=PpIQ <|U_>NԙJ8@\ܲ+i9:ѭllփFgt"qq.{A.s03 zO-qZPDLNgSرp(ɪ-$ G['{+w|Kn6F5{a8}S䫓dbAF5M:d;jQ2\}򇗣}{L}q%Y]I.a=Lj~~| OAԟ?5Exޝgks8U.JY;#*Gmp9/su)T;\^NɎjz[.FOھrS}9LR1{U _f;'/:92-[<^;dwG8IjDzW!EX@[/x\r\d ViQ6q _5Zܯ9Μh*M)r(y:43!W̹-Rr¸BwowwbmX=JMZnњ-9Kُ_s>W m8ۙ3o_ YngSw k49߭)"Q yFyi`%B8[Yx˼+5}^z҈^5ic$13f r`zۍF d#zFs5RV,r@/KT.8A1F<6+0¢sۆhl&q c7.5r{XA)x-b*"*3yiC=<<-m_ <\)rVE~$( c~Q9{7xR*m֒o\(aiׅ[\-v^0[7;9j9Py,'^N8dF}N2|9^ۼ[Ǵn܊k{犥@p^?Og#2az~wyq7'C=+؛ lWvcl!qgmج# 6-X>` l|Hw}oqj7>m}__oˌh*-l1֓&[%]}J+[q[eR\%`I >xp;!͋2aP1{$2!2-) ;F׭[֭-bT;R᢯ SinK"[UNAĹnݴcnmy:mߑb ̺Ώn]hWѽuJ+r8}:}w]ОptxprGQn:\,^}/^|d!28c N8c8cL6MLX8ce,P+#{7k}s]^RE@qqWr{Qs7'KkRʰB;r?Do%4J$[2kBdxĺX(RXMZ yV>P"!~kJE Wԓ ՀP[k+=j\:[ER*4箴!$rͼj &JC+#rֻG+5pVowQ"6،G I{i}/x?F!8~LT ?O> Rcm#13ɭEvЍv(,~j\ 1H>%UxHtY2m3BTJe!,(*ƔcdnUFh h 4&SZ,4r"tR[U f@F䮚[8G[O7 D_]O6E͒ɯӛ۩jVw3+"r,A( ?NlFEgҰ ˡTXhyᬫN9EyQrƶ_LcRB廆YGHZ)k+/IUX 9FJmTZ.ckR5Cgj$^vdiQ2% gy\l[Jv.Tk;늊|MK 
-MIɃy}NZPUy mtr:wX :2mxP۸:C+盡9KcbG'uP 5g }JVRaG׋\g \ϰL!OBaG/ܡfm< $UXvr8"4?[=.t'Z#N'zKL1pt呾cFۆIzs[<,h8,>E{TNdA%\Ώ^se4BJfnsf b3 H:\د9> %-x²b5Q^qB1BUP~T{FPh<4h㩤Js ױ~X߀칪R$lx6Ǟ\e&`zOOrGbK^gPB}9yYz]Y~VLo'15$ G><ёL⍲6Z}]Q+ o+,s;*Jx< ( mlXu-fXfA[f 9JeteE2s1[Z&s**WI^@YT%q "%Wy e)KVRwL$Mz\eU:^y*sFhIrV"Ajȹ)2kyVK"< ʘCYjX>~ֹHLX b-sEy%84U,x̃M\S =,:T )0 |ݔ̂Uk3&_ MĄZھ2F!Z uHFI2af1ehcuhkX+*CInKP_hGM]5Ah㧙6z[)rSn%s0Řw;_]wZ*3XWXeKm3xOMc r&OqŶړ{kyX=%wj{Knbdfӡ;*{Dƚ1-7]w~Dނݣiv.4J6K>X$|rzi&!#wZtzxiҲ $ ב{=zp)n&I%"8l={ȝ&?C)?\G}#~ˉ ьcս2ґ;= xtxmTR ;H%uIa!0`&AD)D=VŞsS-G Nm+Z:;wFnc[tkAhk>l؅|pmSG׬ Ztn|*Y)*0c|;#P{=(ٺ,*;1]tAɍgLI *[PN =so^9_RbVwn1u{JЭl >B7%yelu>5Vwe25A$?.qIX*OC_yKWՅcoJ^Ş.J@H'd>K8[Lә*?Lk\Ev$|Yv%J1X$3%\u_o!+G'y0+_P™'/BK`眾vd|vmwӄLH()P)Y0d* aID+eYH)"a vѧxdj8cqG~~5B`@WT&֣q5!8kX?^*?~{~% &_:+>]ٝqӻzX+wIvGf4Oԭ^}~r}ѝ^Y,憴JeS=aTʳ&`4Ljt{ˋ^,M\w7*-o Q>Mfe6(Irfm°cl|_祋EV =S5Y5ֵ*kf{]' tcIJ8"yqkaWrŜ. j|+%KY2|O-]9 *9d[|K%j!{5xXeT(A;;s^wyFcmw^ .!v7kj=P=?*w^jfzKztSrh"i°pBs0"gLV5*9i)Y7[N #nj9q@}f&cjކ"&r˙&Pg踼NB?p6U?}dcq#zlL~+P L+:Zh簞6Z|.a(7+nGDC10bDom4{Yo"zXEƷ7{|Bdu”re˜) *i]u䱪FQ$:M  (P*SΓ\1Xu TjYR_K妃>kbn( 8l];w>8[@ˮtE+ Z*N$D!p8բ". zp͞^cw|EXYƑO%~Ivm|'rLQ&p6^ZxzLY1{%Jc:F_^ZdFrP(l/qP5]cl!Aٙ["*΢Ű8i}֋F+"JPD0FOE[-qbU6X|iٙ=Џ7$8rY,92I?DY/Lht9Eؽ'D}t%cQԆ@A@ l0G*\0<Ø0 .EL ]:hNAq4GMVśdH;?Q=M3)Yx~√{ V[|۫vhGcHW0w3Μ?ū$57J`G^Z>2Û#r22;Z{.69_s罒Vr9ZlSwisկ!ݔ 8PVUYgo\}ѵ l۲c$`s2?C=\&)_]E#:(siU}Uۋ hڰ>|YiZ,wFs)JJ/8bJjpXj XfOSCYuƼ^C= 'Č'wi:S*x0}ү)Wz;E~MZG6"y-Mum 8`Egr ʊ뱼AlIpts>@Q86fc)Ҕ@A\ b,ƜIǿ{w+x6Ԗ :ihvZAZs`P˂.}h xk= |Q@Ҙ$6w?tΝ|nipwŒh|5BZ#6FH%7 /Ra#hfm[0Q>+dreOX:J! :am c3<¯g!"ƀv аNaX8O8aVH!6wzNb LFG,'"ރ9E06vl*)$XY[)vVJ@6I}Bj+=g+Uϴ3xWqgR_IMbҳR(JJR#5JJRdN nj_j@T}VJR(I}Cb;+xܧRlgFj vOl2oMn.3YW?$@l<*ևХFP܅=;wSm[.*j[gۼV_(e<*ZK-gm敒;> &:)(x0&pRhg:OqLj/z#  1LR{}1`Mq]ԍRL{ 257c.|Xjl n AD6JN `f0& }08G% 6g6<l0<آ?TQt`s د'@>W_t䠈-`m@ɜB0`ƞ~apϋkaeo ppӝ}NVdzs@V(74=L`@|΃(ZB)w2ށd;w;C? 
C?a`9 \ b|:4H&#Vdp{#9Lh>;TpZ8jh;8BMl(9>?MgixP~9]!n~s-4m]%,sfWYֿ͟Y&B(b)|ҘϕD<È'B+@&*ey^Pr5*\N!R?"XjyW` 3@SJJe T0DAY菠B 4R’9VK`b𷟨:.w8H{r.r_̄-{9rvVK1$lBdQl^O0)#ּKar_a-Sm|BKZ0o̦TnK9giA&2di򨲹곤|WRK&.)bQX)vVZJ-J RBJa/%JK%y[)$% b( +$2d̄r 9g+#R(bmIlmQFʧvovvo얋 AG5L2[d D9JRIlSy P0ֽwhyk)R<[:h@]e-Քl0jbqZnzu#hB/vo9/#&LtN8tagoY9_P1IIVg4vu Íp5-|>8^:"p\"_Gw4y{HӥمЋm SŪys 9cT$Jث'b<Lj\ԙ\î.v'ANmecIf%R"(p ?G?㬞:[`uq'ШVkרbNlRf2j:ռuyT󶩚cfbMK;﯃^cm0zqmmcs',o){A'/=-,Z׳[s2y}v'9}qg *W>HhN56CIZ7[(>FvՁTnO-|*SC]qn ZP |T'.픻q:kibhwaYίwz%; )yd|WtJe (??'X7`+MZ/rn&wl8}FCfV;>^]Ӊ^7} u-"yTaԆvK6'׵M \*2#\ef")t_oڽVR\꿟?t8E"|,N*S|+&[|.\b揕ekaE&ewUƂ7o$P Z̞nu&(fqZ}g#$>0|u˗R#dܯŋ}o(_,&wFD"lQԔaB)I9F-4OEK;#qQtk;8d0WRj# Dz.0&I#pn*wH9-lWgO.|}/0^]|:x.~4@B c3e}]Qe^ѐR(.dzXi0-!0bqL^6W_gXJ QdF(Y/uHbiekSL%2sq=[ߤE[6HY:P)S ;g0$.Nƣ`M#ԾXx4vF2v1ŋ WK)}t袊5Cw.`*ﱆlHN !D b3G}X FTe!IP٭$6T Õe*IT2(䠪5+vV!D1+!TD+mʝ{\s)[#"4 ^*A+%-|-;09RmK?ޏzmfGX2FTAl\xx@TDqLW~k$MV+!B޼_%zU!B,9;!#O1fu2;16ߗ"T<<# NGOk;ux:>^=Y8]-HZpӸ2)!tKE"*^Eaop̄ u{X8-Q3/5w4xu,ک҅h<;N1eebABN!.(PS%F(Ǚ*sQL#L t6mi-x:'RSIM !.iRg&=1R3c *gQu3(2fuRxQv)5,v+$8vY1IKk`И>&sL`W^/K;=ftbٶrvoD7|hkk%Bqn! c"1a]22`bVIe;xm{^f2}dh2Ze!ύ2wIb9|k,f"7Q~mg2+L[<51$ MxcITQ'n}8K':owt{Lo>zxיP8P£7GӼ0>QbwRDr^N=򔻑aH<\Lrv9zhݡ]{?I1G7.8p0,!= 9;Qaqs9!,zPR7{r)!n>Q(n8{EJL)گ K)~t.Ī ~ rq7 iU`%0ppeL.Y My^]#B]K&ߡ*)bRL<$ٯE;t_ot\5~;ƫJO\ M6jiWx%\5@x:+A*|r|el 4ŸKБSD< Ʈexu,7 ROefpZzYјM[0O^.tën{0ч \Zm n?O==-Ms]F1!"3=0#!"P.JY"L$5BIDqnOۢ:cmi]N/[\ךMgK/wQOG[}끆~%CxT=p0. P+_jJ"릉˨LN&J4-?L. |WeKM E&2E$&a3GKG F7Nb3>oBC?GR)ѠgMFLǒS# `RRWJJ*BsOk*KMAIYEsS* #r/+ƓvT Ggx##:a?M*nSV@+"uT U(YZ&k b,9%pqi'JZ4mD=#(#kRfDR B'eFAм\V*J\IR2$ M> D@Db-n-{do+lZ%¥ɵtE^%3Y!# B JBYpkjcdHѢ=c,9ѣD vELimCedAhU2v̥֚۟II4ѐ3 #1CG#LOޏLd+B;3_DơLdS?#>ٚm|3>+5V BvK@]rQ-/% = A&F56Q##`1>' O\f7KtYIOىBD!K{<7e)7lS-S4!tǛYTl:aTU7vd *=Ib5OeƖ@H!|$mˣɹ҇B\aQz n^9if%,zFFD=;W8F! W$BvoF)9cUsnIk*Sta±01(Jg-kF54/̵O巗ґ췯mfg1. fRGw*e㡷N+FDp ie:t5V! 
v%&X%̝se?, :13+n9_#P *&YNLQD_킮iDStq7YN"29Zx+hR;q#+?m܏~r^6䓁Xrf_7%%Yll3MVW^" .@hyVk UrRa2;*;@ B'=Dj@91$गdLJ鞉LD߇tzRZd71LJ#)әEn|),3$RHЯsWXjsox !574@Rf5C 3Dd* TZ+Α9 gkNАT=j}"4,D.e2*b9Vh0,P Ģ6N tBlOLj󬮕Zw*0zVjm8SnE J\siUT,%),\PѴ0HL؏&HT $a#jBȊ GB"+JWYY7pʌ"ҍ*Hk=gZYSF56nouV)J4UOY )-4:k[_*5<Tݤƭb{KwV/LV*Dզy!$7o]AP%vtI ]* 4(iK*F:_t~2ӿMt,Ϣ8%(#& C^YEL_RV/(}vŌgfSB'L_O۸ԅ|68?6˅\ոy5?6 8PSsDPWhO/paݕ~O$Ɓvx}0@*B >2ngz?KzxW˧_x^i1+AW86&˧;v2Zp RqxwUΥX')f:Z0φϜNAPs;źNYV!oe!ŵE2K ~㥄½<3召KOHHg^² H{n֍Η\_=YZ YBYOfQ%sFaJaXF Q:nMV:Zyj6!)qnj7l-^ЉFvzjCw-|WwƘmmZ(pp$=ug:Z=8rwGQd~~3KW>w!kgk2{nBAP indQ {.X{NN$Fk^T"x/yUkElKel_bgF N(͗)r0;=F$>ic QDэ#ZqaUc|iSoj2iW˯_M {5cT2'±rV$Hw|)r;+>[6԰[y gKb $A=/́?Np8CkN%DwԘdM2mtXJnj ާVTc-=d|!9zpm\vdWnGmb]C v=_^SS&yT**RPYrc")ʔH)BHBZO<@Ah%{ߋ$+4炘Bdh4TBRZ WBF0Jve@Q-u?u )O*4oa.W9)4&2I{PiYKy3c2b **L B8ܽ^G/)nZҽ\C%hwCiεqQ)tc*1b cȈ݃P)!^q7Uq|^굈!%R_^v|MKtĴ/tp\#߱4{f?b%tWA) jCVC.7C$ᇞ;qH~&sJKp=r+ bRl=5!J=uTNT<@"KS4KdrtkyFJia Bݘ۝bTxPoJl^%`d(Y?4%4,υNfEs#3"`7L,b-SQJ.2+ fJS)RV r I!4ˆ,:g y&\{3CɄP] ImHH;&$@I==)|'$+WGWNPw3!aiS}0>I~rDAJ N,QwKqXsaDmhAMf"@[둲PIR)22HґP)BBzrND/|G{$yM"7M!$XǀQb;94YQ$ܤ"D$ *sn0 5Xsk08D8$JNߜ7ZbJ1|;T3lONtzQgY At^݄#n]<ѥ1J. y/QD -3Ye%qٟߟ;)˳_o2l81$fl87v<.Ϙ//F/M9, &[ЉFvDeࡵ[| vk!5LQOLщԈgE_=󆭳s0Af:0+:pI>N81ef"֞,<} AÐ=p"kžQ=}d (HKjSO*a%mpITS4iwk?_ZUO:J}uy_B9Dc0+=j7B%k\Hx%rE~#(װ?Kb8ๆc|s [@;h DŗǹA:n}"싰3?Sw'䤄kk~- ;r b'c.ުU)SoV}n- (>ALGrz0 I:ͳ4|: (bI@9ߚ9M>=a1 [_X^I=[?/"2)/L\2n4DȜ(!RR$qԙ\3_xFAYS@n͍EvpYH#3Й[أ.H-==^heLNE ܯaWNE ts@or݋.P G7эDEaTMYJ3e \ɨTe;r-%=rBRDG2U7Т$hL )$"<'+Jr0S!,x,; 8-<byRdPyP,QOo0&KzP/z%pIL*2(J}LQP)xQhpEy]uY${En;CA+=X尲=>; _-K>nWs8a<\;IߋU#LpX}왳ہ ໫W{rZU{t5Y( 875k7QIHOt/ja!M4Sm=a<)R3G=yi(}ij`O(=jRR&u2A5u5FPz(eL;r5 FNtD;nC)Pk; h6FN5N<QKۉ@kTOkT۽E GRZ=7BJ9TOT+YkF~B鱡 ?RQ*RD)~(-kE'!J9C)'NެÖQʉJH9EP /UPZR6=QJ}=-~I5pB1RҊS=Q-'(et7S=S8=7JC)/oHV@)W~(- GRPސ*&RjMt1 gQ*$t:JI7J)[ CCN5j GR9g;y} ?e$t(R )AJ{B8ǍR9$JrSQJ/O$|̂BtHQJJH PJJK5W']z(Nʉ)8r%Tw(R(KJAZ=K闢ZjaX|SPO[\yPx4E.BR6ua2;7B:cb LTb qX0Ǖ~Wl'O}޵5qcN)/վV˱3 W[1M1x37 Mt7)-8،բzuZLn?-x3UT[(֩NPƸɤH91F? 
J#jGρ.D L"V~ C}Vbf8\nXe6,7 YIz}3<~v}&`ΞT< Pb5éal4ilD` 1Nibh 8$cbVR]BhpƋĽ Лy,4?܄:/,l"4h4i^WMs3-9vjn F5\9 9)JX̙w JEGlͯs ._2 -/]zzMgؘJ [Zf5s0i4 ŚWRUJA. ,FVO\sj6VzmsJ(640Gh<40_ bHt ^JkNjC1yzI69fDػ+~s~]ѴUDS"^KF(hw0+6lDGny$I:+g9o߹DwK `&$qbLc&>7DfFfĵ*O0T~enym .ay_i--K"2 DCpudLwׂW?_R}la8p{;d;d+#!XC$\C[WkDp#;eʭ;e_>f:teHnT@G3Z N_s}k! HMݸ,ב3%==clHU{? eWfnw?{d[UtZb Owj<*2blZIx.?۵hWZkzņo6YS lM:2;Pd-9?^)f{ dw_kA`eq{~]x1̏?|vq7=6ۍqosdz,@AـtUp(}M (o/=;bZS&Ul:`ivc9Des_b~|X1ŵBg%42,#e\vc NL6"~1g?I[FYؠr{&_Ydz8NqjvmcX:<|Z*j !DžӒu%OnZiB^f|nUUz\ RL':ޭCN7-QRڙwϔJnMX 7,# 08 AQǻp[3<ݚn{~ЉJ5 ;廫_R}yb}yT5'`cq\)xY쮯N9nǣUŃ`Z35j/οF/>$/^1.|As}mh&ğhkzVShTt!O{Pm6)CQWmJ(=%7aɎlTKdFs߁ Ptx~`7z>[,w}߱ߚs^/2 B i &P`-ṽ܎a%1G%'@g (%Q3Iɒ#8j$Z$g?MBdVԮɇ:6"݋NcѴ{AE`jv/% u^h ـ}@& ÝW]u3FE}J6)ִi %GI-EK7`,s[VX&^IHƊ8 0#qdavzJ! GO)BWl1c{eHl=k ¿Jy4@;ʓHC%$*2d60$>` 0VK :@D- 2%wB 刧^0fT0RM;>܆eu Oޗ5jC$ł>GzCP`=[zR'=eڅa" )9G6Jʹܙ )w m"_àx,k"Uk>ר75x%xq%|qG"'i1D|<C}Ɛa2_w1wKfI l&^=pi2_ ӆ55BTIWn|c6|94: X:=LJs`Zax;fJf'o_s Cazf؃0+ `7\\DP )^ WPX"WOLW$Nx}X\N81%5n7 xOFG|x>bAǒ0I>}NGm}‹(u^*1ިAojF'.<'DNI4@UB9 3Nw%[zV澋̽s[Ma0Lh Bmto~I*81j!jTT=bՇB;l8̳hKz)K$\5N̳mg{'Iz9ʥ~܎t|:^~@!3 c9(L&*785Z2oYsٹ\GvToɅ??A;g0Nৡ!aP0s0/Y+"~?ۻK)4s47 G(LB;Z3ujL;G ǤfojBK i.]f5-X%=ܑj=8j߷.7IY-hDfefu#g҆uU֚UMyV]U(S :%?\ d<%?L:# //7ޙfV湉 lfZ| vR I$”Gzp [HEpFj3©7aZɨ?% %B~Rr|yAuO]n0 v4OQStQa%`bâض@"Jlc)KJʵ !J8g^#k " ͝GqJSɆ`  [) *:pT!k#[PJ7, lq"/ ^'4 !q+^j 6&6" b`S$Tqk ' -Gڸh9iXpCU1fd/>[OhGρ.DĸSI@);=ܐm3yk~Ġ6 `N>Pv.i #'r'2$i4Zۇ:bT vP T!fF: _N $Ȳlu* +Cږ}n$jÓT Ep8-F^9?X*<:Z"[(} yFxqaiO=78XCaD j8ݰFx@>"|폅|W1%N])t8lM`vƼXYp'þ}IۓhQs Kz !->OB %V}^?#!訬56}r^@KTTIx={(wh^ Dhy4&` > ysP('Ս؜5Z[W;8^0@ Phs&rPJ5#kΡ+šFנA!ijMV{l :̿2 i@Z޽Gs$Jǔ$ H=P8HJHhp^3n/sj|> "}ƄYtF*A:KΜ1\+24 'p5ChsP䐴uvAJbn8^L`Sh vQ!La˸P"\s` *g⁺y|6]G~,t~k˗gO 6|k˯?YC&R}~^/??cSJ,/y5v|uf_Gg VXT3~OWg5'#ogӶm\LD!v"FBC;j2qwa!m/fB b,z mHЧm4~N ,ml$qc^m^$Q;ሔáD#D5S]tuUYq-:3){:k88Z)p5id`(kG@S[Ԣ cy/߀mKO K̹ p'>} /dtbȽ(@G/:sCö탶 kuC^[Ԯ3,-v.7ݜMoeSк)F#&NeKVx2O1зn]+jEn{sjnQFv-Vv߷XRuTKnۨ_=OsXnŠCޙpjݻ7ẇE9_c2^Ѯw+6DrIhm)5zBd85|/aNT 
Ӣx@Δ#v/.qa.Hu]/MuT{;nmO38aXBj/`O4ǂ  IСfigKmko΂^ &\jP?k- RZfLH %Sb:-Fvs)|:xY磟&ZLY>Ks:Y\eU1~{}ց= 񢚛FY=Rg$_aL7fjEjƟ.UNU |?!lxWgyOfw LI1Pwjѻ:n Mͻ_vn] C4S[eCz7UKBuwxe=֦r-*ѻu3 enk珕uy–̑0\fƕO\AI@7e{C=t /jp-lRR8|j/zh"}Gu}ۑJTX(CT^@8Uܖtw.>cwjVF~t?I6/O/:5$;Ii;$ZnbtuâZ 'Nk]5<38UT yۀeg egvLsBY=ܜ;e5R+zϋ>z/%sFZy7vd{WDr0|CjGfZx={ 8mvP3kg<iix61hLHR j2Ⴃ JFuB^u&(l;KؑJ՛JS@ؑJuWTڑRWxGw#wxIٶ4~Q\Zr^&j?ڰNOW²ݥ|:Ngowj1rz[e0,ʆ3LP؉S˅v' MFOV'c|R*z̽zc˟.֟.]͘\w ^JZ ;:-:g4|<`ǽmf^4x08#G+CǶ]$e[;u2#VI>#vEҞ9,F?Ʒy`IIBهr}جmi7xzmQ؅k2Ý`f7"s|,q'{_qi]k&=y@gG-FMmUk,=@jjB.eSjEj{ G}xs:oYaPԗMQz(u`J|<JAZ7Qzh(e D2@cR_6688J/j(3(g|NR gelg})#)SQz( XRjPZJm #JNv PԗM ҃F)gn(6.DA37VRkyRP*ddB JavQ 1?d3v+ѳq/RQ R2ǧ@cR_6Ư+QRho;wKq_i Vޕo?YugjpC)6>&eSjAqQMv֗*1 Jm 52J1e6i<1\JmOxQz(R~#@)P7Z~>QQ+! BˆԂH~P!TiHp*6^܌jF󄰗MaQ{pLNj`!]o.uLb|um"~ίo5Wћwx6.)ԅ &1Kx߭iI + h$򽛤Qb$] v0dB:,r8Ϋ#HOՑY ft19e]0 Lխ`{ΌbDws EZwP~kh߾l)lӐ.,E&ҐoS̹q&󊎹B&K@r d:)iI*hƲBBU+lD02!3Kkl'l/q&bg&)M?{;j;k.h.9/9jȮ߲۬)NJ7q|,/bOߔ(-IxN"VmVwFK:T^F?4]u4zw4Fs_-ez³lֽ[qYtn-)H1p_[ml׻?"SƮ.g,Nyy:w'Yiȿ11p!P!gmKS%v{{aC^E-򮱂| @^%xKFex[N}eSj( DǛ1rE~d׶1ZR.Ga2A N磟&ot0X!U>57~S/BQIn;tfMYD`kS^eu%̎*.8{Lg!w3y3x@Ofj#pamgލw Ձ tB}Y68vK:z.!)q>n=zP@'>mYڻ_qn] C)ǫ ]R~٫U[,nGgtl8{GGn_qzvIiykggĉ^ڶ+J>.eSj#,3Un(el5oK}ٔh8?lrC)eOAԭRjQz(e `(e Ԇ}遣TJEYDSr1 pCi)kGJcdER;((1ͳi~tosnji)SRV'ԽMމfWݗYg5؇QwaI flb36ImN8 tHa1X[&0Vn'Lj(vkC cS 4 MO ƹڜny-:syɓ] O]^;XH.d  \>QqC'Z=cL 09-q^xJ혌88ZCRb\ z6sE(n.\nv ]KUHJ5)i ,$XjRT3TKY&"8ʰ~uu#F >NX7˩sUIV$!JgT$i3S"FgP, زEb>yx0Z&EhJ8BD8,RذKcPa GKT2Q$er@QC8ohGuĤikPp}u˴2TdPAM?*@Ċ 0x<ᙦ$քu(80&撃V.ȡ݉U.Bp ΟKc~3BL &N^LrDfȲ$wL"!r W);yuSubձ9v}z q)lq5N-9v1?wwwFQA9CސY &O7.dh?RȐ!,4 0?Fg,yohK{eyC>XΘ:2 ?/meв,%nUam9_. 
:O6aPIP\ OYXMyK@6kB&uwb.0c';E)Mg;Ou ,=iy1k!𼨶[)Pcv $ͼOS=<}DJQeL."D"5a8 cÜ:)" -Ǎ H`M?$)6PVO>OF,DOs$۰L4 j3ŎܻGR@,bqАYea$OQB#%+ EfnP+-ʫ"7$H.-gMU3Z1YCC旳ϚƔ5RH: (֘˒6ciu8]^*#e4G"2jJS)u;#X9dAN0Fc$ >XR%QTm IP sH9c 4DHh(ԅHEظSBsE !hCu E ]7}.Irc]fp8>ZyV7Ii?/$8Z~-"O;ueM6>rC㡿)BW`ݡ"NkxzoIQ=Ӂˏi>Bu^>B++9&3$'6B`jiȧ #(BBc,:v:dbF#GcXlo)˵9Ԩcq+<i)\lfd˻TWC3 ڐWy$ֳ̀Lbq-e *aͰAĵʡoL`r;:WW mƘT*YPUeҲpͧhzg>XsMc›n|QEaX mE$.5Q'B`Q5ZkDĚ t*3$Q,§" ^s8v\"`\J@\0(du0Fh mCj'*mC@d;{43hA z_?NYXDqmDlEB6Oq}qh0>Fq%Dn.K,7k `& y?anZ#Gt0-3>;[ʹ-"1vӟ%<$nh]d?}ާCۍ B(&~3L¡|2:f:C`^ D0#|y0(W16X,}$nn~Msׇ̮+P2/} v0ewܹ|3O9o׃;  p ž=&ٯiw|ln;Y<{M&KO}Vݾ~z|%t_p;Ųpҫ׃}'{LK4+71Wߛ|7Gyp~LW;noЁ3wn^/D振[ϳlo>gQ7Bߤ?:Mw ]pAػA %o՗}Ad<} a MNmҾ/wU Av~J`G^eR$uvFg;GWPe#;{5rI/}$pgN' ҝ|У D_ET=]haLO$WRRCo4]L ]zՍ7K?A13⭻lϼFvoUc!NR;w+7Ga{6t+^}}n%͸{Y_u/G$kt<@|"߇C&/3zM{̞>_NL!dZh ФI)Gw`"Qd3쏠)~fyb/a2h|ロ3~ g~?_` ޥF+[ҹYw+(L)|1"3溛,Д 7kYG< cZֿozP=J$; 믞x2;lY[ٳͼ\JJ 'R-E*s ;RէKO`&l15f=+ۮ0Lg;7Zxe+b5%ׇ7s|H7ZA42jf>~<_((ufR|6J(1CR$ G*4` Ul87QL(`c˥:)Ȣ0:x"/ d_8nྦྷ"[RCcr%$ھy?ƛ̟f~]uR[vy)Rw^jzڊ3ʥl\=`BP³c>ks@Y "#۟kNwL&cwg P©f[gfR60s,T_5'T2J}6RXp!20L>@"^S0RB;KW:w]娩\>Ir^U,gdIŦ˺˻>| :(좾`UڅRОw {F@J6MY?>-$[#0z~q &`go(-]pwϼWj&NQ -"*9o `חj dYYdR a5g ̣m ;/;JjubWČZ%:3>ϴSDC{ ]b>mYtFe C?,6`AT(#TD# 섭ھXٛFGO"УK #?ѐ!9A0ȟU؁FcPQl#)g[$(T3*i"8)~`{6Iy0ZEJ1'Ս4ʆ" TMX^] \N1s8)k{zxPk )ǨCJAMkծRˉvR?۠i㇕H% :v~ [JtNrp.(ڰ7C%ݔ}9P;fk=E?lg.0|^]~!܌ {Y;ߤ|FV@%A(8,=> B ~M/WP )>͞d aѳlyWa9pͷ'Ydn"pΎ3/N:d\!J"w\j;^<{0&5  INZv3W%$]8~9ͳc/)Ցo/YͽR3{gTKΐ%3d)@g*ADvp{snΠM"ʝIcfRoe0=r\7#l%˼_FXnIl"2cp)1% ˒mI)g&p: KbeQuTk%5Yb 4MR4s?l>+mX)d̓b *k ,SMee Dcp@]T_^Q #kp!T3:VdsPj\ah*~ pZUZ2=#q/ &;*97L]FBsVG6\l89sj{vkq.IyJv, u_]G֨H@~ah)lKf*-6B!z|)vg1 iEi(11rcH>p<aFZ)7J?fwtIx 8բ$_Yɵ*:=l$@7]{@Q3)Gp;oQ(dj ß4#]d(Cj4;u؁; h"AjRfQ@pV , %If%Ը +E 4:`:eZ2jkidI-F+ǕF; 6l7t7@v2rw .U\3 uK$juԇ#*B0mmOsp&|z7̄]`:מd{ &+)-/en[_mO0?e S|-sݼi WfY?5gMoDc+#I߼ ВuVcLgۥ]D?7fh :6,o -@uGLJ#c+H$.AeA77UGEZCib>Gn^aEhd~08ֺL:b_8^@/M- /кytW1r ߟjaRL-$ RobhK YFJ=/ї|/R [zMBB㙃~\NvʂnmPvG }vw_IpHLYQ_ ')eFZg+NvVQ6lHKɷZ _O\y|Mr1S7wwA[S$|+75$pd<9MFIWqw5F^ifaAG%oțQ. 
)cX=< 9n2"sl`D86wS7 l\Jp25ӣ7 DD Jyrd*.? ΨGIDKbD ht: 6I}xp&_e3@CLeT(3yHI `r#$ej՗9#rB:g'>EJƝ؋mTAE/[њI:Àt>ŨNH- 9HŲ5KU#(1tKqD>`dv\(\`#yOEkQsC}'dW,7 i!թTJ D8\s"ItـKYz>&LZWbވL:(-0qL9йz:ΧP"MqgMT!&]`#mr=j>zɃfU$kMg<2jVzB 8qh *:Bfܙ#~8mRBq0yL&n3K% +@`@\+L:ɏ8`!{œXf6x dF3V0dF^ՁcZWB˂N tͰܽv+dXepLT$OoyEc Z6^~8Nxi~:@Osw!#$}/L>מ^+o2M!dA淋A4.Q@pL@֢0ۨLr@k;q%@ ta$Py.~["Tx&{58F,qa*) Hr%f?ŔI9bMłi "ܞ;rKXuI7b. 2(OU ۊQvtτgFwG Ѧ6u`uJR:C C*Aw8ݎ@ %zM_Ǎ+:[Q&: ⡾5@y t['4dƻ^Prͷg `I@9J,\TtIxOγ b`~ x?QG|~8P^ >hut{beݡ cGq# WvJ4 S栊D"ňg! m<sb%ݣߌZj!4b)Ud gB6%LV!9P|_9OL?KS؏:.MwWw2o)pبskC9dE V < s;+QN*@Y q;"; y?MsAq&R#59Ca:FcIC)\Xt0>' :ѥA 7NvjxYӌIR$7"㥴ɟu aFe:Roȍvr6eZ# ^)pH0ȃ"伆cUӢ~AlRHrM~뼴=pU!ҫ4JT@wK(e:`E_a]Wl?1qmSUVŽiu5hT4!'0|u 1H`2Mnљ+*&=lZF-`UdžJŀM AkNfV;@j羾n)yjZJ~0jcea1[z͕A0ƨ皲(!򊚬e fLFK@r(iNXk3HBMmfCdQP9:Vm&1 9J|&H4m1LI88%}TE[Q+W s8&/e с Ȩld sk@p`N8i,#! :UD)a"bYℓ$ҪYn|w}]M}'S Lw|O~I{=OZr'6aXWUM-<}%BAd1r҉*{㎇\! ?3vI#T{w@fN_ۧ[~8O=a·R,m/BV"Yb(ApY˨+}z!*LrO%:r3QKwZU ވTE=N/.ゑzrWo?Y/;9zȡbᏚ\U#,)B id[ZvAI,w[2۞ś~TouLsxȞN\pm/Д# thK"RM)&3ey퓖ڔIgi}&e( '̘kjvdcՆRsefP -5tpm$ _U}zbZnmFqR"o&72S2&P*p{LhgAt4(ǩ$z( iņ)x~R? 
-`Ӎݘz4fWAõ-ZȔ(/IiƒqolJĆq}fd-"* Dnof*0 BN yȵCsJ#NB;ƵkY l Hڹ)6f躝0;y7=#-|GBeHAy?z& wrD;ե/y_y:QNqS!RRuHsts@hVmtmօF]c%OfZ#=9fBEJ"s r+ʛaIKnuq9ۗ2:iLV~<05˷-a4>H oVy$}4k%\W6kw!*bGpą mؼۯ (, Xz{ֹ"(+_W=UQ'ue՝OGq6$x_w쇏QV.M6=2_$ǹB0X&bq:Mk{@&(pS r!)5E qUk ] Ȃ!DI;eLGًw.ŵdHtTEGZ-&%+9C-*BdKaՙ믲gb-\oc r|ȡ4ky1`ki ilI9*1s!c5Z؊eʁA >P:{}E 5N:[yvQhwg=i V~G+j íؑ1 >}^Q/ah;!j3vaf's5S[VE0rl5Nk޷8 GuF+duIaT@weq$z<""3`6'I٤HJd;lE쮣Ac3X]_~q!M*r (+Y\$~~cW/}f3Cք**:&R< (Q em3({cGXjzWfg6GhO~ɩOJ+wEO߹^$1l{Nt"ɻq7IDFJþ O +rnp GO"/:SK"d)JA~%Fȡ )7O fs\V"XRG΀6uf{nSN(Q+ZtBQ< 5#HF554hQV, ⇙CPW*Tg8l2ɭ U֮BZdJ+GN iFcnٯp]J|̀=+:BCuf K%]`3+-IDIlZLj@mzֻ 쏙 > vƳ|ـ(^dh8 W[}[HJlH> XG&bU;\m1DrA`{[7lV[w$#d-5e[b`=a} ?kexO;y<'g+2_|_iAœ=;ueP$eqI艌Lò 2Zs1'|Ď>9%]]HV__z~t"\9  (  ukq6E Ӌmm$u憎?Ge@IZߝߙ"i%: Nz^ v*BN&hZe}n}ϔGO=aV J&!]h+Ѥ:Fa!+&1,:."c̞AL"hTd<2z ^![j.?M%jI9߬\ 9QK![[;o Qo[%pMLlno{u|փ.6ݚpgӭ7:uBHQfbT_[E~!Z~B55?nx#jp$8vVh}>b?Noxv8Nu!l˼w@Єth!U0mn#d E $|MgwiXސY"=BI`ꝶJgy;?>\ξa,fo;-`a&tƕUxcAd0$Ҕ=˾=&oɂ"nRckر-!+Q-ٮ"zy5N}OmR1$DYȓ)*YDF D% *IX 9Z9COKf|l i͹}O}β6,]2}0;S\6,UKVlDX0[qILؐ^f/5p*ퟃ_F<lQCyttm{#ѡdZYQ 6U#fC%朵%06YDI޸hYcL;c+&$.R/i?,2bzVBSAŬBR:"\}Ϸv[{jlF݃Hxu֜BhU,ZX kQ=7]B60Vm® VS~rz?j n1Tǽq@ʏy٠/V\VIM*8'_OxkƑXQ6-mf@"VXT˲>UZĐMg[=HqYw&(t8QPxֈl$*I Y\f"T7c.rO͔!/nnËKSǡg:̈́14jns` Bln3B6Ԓ~!lGxۆI_ ;S+;r*q+5 ]c:DOn2+7w A3v5Ouqq_pZ C q0^ë>1{\o}覼ݩlr}ގgqvO.5 EmORwߏ?LWg_?ǿ_I!j &VDSFʔc5yBLBN7㏏x}s5{CluQۤybu#Ty3sXK5҉*򲙯c꬧*ܗ-uR+G 25Hd>Fy=Iգ9oމwGV//h*xCW oCC"b1~n.1sglcbAnH0$wÆ?~<]v ڤy0Y@a9YRv,ji5 |k[〳z0=>bh|ֱY w]72 Әd%W]6js^z\qN CEcdz? 
2&g=ܚ?k4d*E&e9A˫N /A]b/2d+ZU0=m{zCB!ͺ3 Y#K2qTzK'tL) QB办,$AtƩvO<$_0vFȎTNJ'f-R*fd >9q['78HLk {/BA^#nIn@ тJ;Y슟Y^,bV󎶽r={MO}7@^X{nY^N~؛<@%ge֬k)V>L!P4{OC8`o `퍅ރӈZW5_{veA8lC`iL弭 MGDUҧWg/I FVOڧ//Ux%ޅ֕(:z ^$wfrgCaƃ|rb ٲ n׋K' )FVi{IR I ^ _Y*[:ذdcI?,F.Bt4q,Bj-iߺ TD;[D}̱ztW3ś_X=ֲ xSߩcYgIADEfQ=^'>VNu`s"e ӓΉyhSEx ֆ5kh-(Z/*{ ~#[4/&_]t:Oy̫ 篖12qE>{'wc6C6Cw#F1Nݳu+F1;%J2R?mQF浴W#x f6, 5ܻgv.D,u6];'oEV* H!F7)ͪb * ?W=ۥl3<\ߙ#kĚSN!N\Ԣl;(S:m=5#Oe;Q)5M(f2Ϟ  I @(Z_Mw=E;K佯T v$c ;F7)@ 1dFGH)BdLAaoϞiI ZwѢݪaim&)s~Fd0$fr1tJ,>.o^c~n6Պ,Ŕk*ϻϮnϸ3LO~%j͕l !.'%d-vH:5{NĐ h 2,&p0 r#E۪K!O;uZ+ "buIG A:HZSKj257NH%@@R:ۀDJŽٜݫ1A NPfzpyf+Bc9+{sy,Yȝ&)CJ|"XRG΀6żfq6Ϛͬo#=z~:&ZZz6`pf'C,xKŵ6\f5,n{hiV@x[u:o)eAsRJ0Ka?Słg弴)+HeSԨttŶhTD)4{%npz39 >}wQz2e6-[m{ +V^ܿ[8h3G 'Rm#Г?W#I1yK #(%4Ɍ}Tejchf fy:`ŠÈVfGvOxr5QwQ["Fv[ ԱwʖHv{ "#le8 Y.pSnï6qV6<1nK%KW^C]qƈ G[N} ?Ɩ`Z'G~ ,nÞވrd2L޵^;n~`x1'fPcÁ7"φZo4ʃhA-T(bQ<hTD55,WM%+acw$̡B5K!+X"hAdQ>7:+q#Ř4aA2" 4af0X ֘8]jb2UWJ%Jrl%)+3}Ad|E9+[*]CȞyDL_di- NH)rZ).V~CTVn#ZVM\?7[̵ڴn\V/ [E-WvӚs`/I#z"(QLNYu L ٚ 72 IS@OڴRV58_,mµ9Gsߑ"^i? 
var/home/core/zuul-output/logs/kubelet.log
Jan 27 18:05:43 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 18:05:43 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43
crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 
18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc 
restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc 
restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 
18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 
crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.443506    4907 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452676    4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452742    4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452750    4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452758    4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452764    4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452770    4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452778    4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452785    4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452793    4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452799    4907 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452806    4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452811    4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452816    4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452822    4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452828    4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452835    4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452842    4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452850    4907 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452857    4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452864    4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452879    4907 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452886    4907 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452892    4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452897    4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452903    4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452908    4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452913    4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452919    4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452924    4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452929    4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452935    4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452940    4907 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452945    4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452951    4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452957    4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452962    4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452967    4907 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452973    4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452978    4907 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452983    4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452989    4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452994    4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452999    4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453005    4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453010    4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453015    4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453021    4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453026    4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453032    4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453039    4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453044    4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453052    4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453061    4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453069    4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453077    4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453083    4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453090    4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453097    4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453106    4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453111    4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453117    4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453122    4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453128    4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453133    4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453138    4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453143    4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453148    4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453154    4907 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453159    4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453164    4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453170    4907 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454096    4907 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454124    4907 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454138    4907 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454147    4907 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454155    4907 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454162    4907 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454171    4907 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454179    4907 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454185    4907 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454190    4907 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454196    4907 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454202    4907 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454208    4907 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454213    4907 flags.go:64] FLAG: --cgroup-root=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454221    4907 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454227    4907 flags.go:64] FLAG: --client-ca-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454233    4907 flags.go:64] FLAG: --cloud-config=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454238    4907 flags.go:64] FLAG: --cloud-provider=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454244    4907 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454250    4907 flags.go:64] FLAG: --cluster-domain=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454256    4907 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454262    4907 flags.go:64] FLAG: --config-dir=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454267    4907 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454273    4907 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454281    4907 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454287    4907 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454293    4907 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454299    4907 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454305    4907 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454310    4907 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454315    4907 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454320    4907 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454324    4907 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454331    4907 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454336    4907 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454341    4907 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454345    4907 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454351    4907 flags.go:64] FLAG: --enable-server="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454355    4907 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454362    4907 flags.go:64] FLAG: --event-burst="100"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454367    4907 flags.go:64] FLAG: --event-qps="50"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454372    4907 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454377    4907 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454381    4907 flags.go:64] FLAG: --eviction-hard=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454388    4907 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454394    4907 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454399    4907 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454404    4907 flags.go:64] FLAG: --eviction-soft=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454408    4907 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454413    4907 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454417    4907 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454422    4907 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454429    4907 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454435    4907 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454440    4907 flags.go:64] FLAG: --feature-gates=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454446    4907 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454451    4907 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454455    4907 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454460    4907 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454465    4907 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454469    4907 flags.go:64] FLAG: --help="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454474    4907 flags.go:64] FLAG: --hostname-override=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454478    4907 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454483    4907 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454487    4907 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454492    4907 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454496    4907 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454501    4907 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454505    4907 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454509    4907 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454514    4907 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454518    4907 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454523    4907 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454528    4907 flags.go:64] FLAG: --kube-reserved=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454532    4907 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454536    4907 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454541    4907 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454546    4907 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454550    4907 flags.go:64] FLAG: --lock-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454571    4907 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454576    4907 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454581    4907 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454591    4907 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454596    4907 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454603    4907 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454608    4907 flags.go:64] FLAG: --logging-format="text"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454613    4907 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454620    4907 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454625    4907 flags.go:64] FLAG: --manifest-url=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454631    4907 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454638    4907 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454643    4907 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454650    4907 flags.go:64] FLAG: --max-pods="110"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454655    4907 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454661    4907 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454667    4907 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454672    4907 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454679    4907 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454684    4907 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454690    4907 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454705    4907 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454711    4907 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454716    4907 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454720    4907 flags.go:64] FLAG: --pod-cidr=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454725    4907 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454734    4907 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454738    4907 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454743    4907 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454747    4907 flags.go:64] FLAG: --port="10250"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454754    4907 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454759    4907 flags.go:64] FLAG: --provider-id=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454763    4907 flags.go:64] FLAG: --qos-reserved=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454767    4907 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454772    4907 flags.go:64] FLAG: --register-node="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454776    4907 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454780    4907 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454788    4907 flags.go:64] FLAG: --registry-burst="10"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454792    4907 flags.go:64] FLAG: --registry-qps="5"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454796    4907 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454800    4907 flags.go:64] FLAG: --reserved-memory=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454806    4907 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454811    4907 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454815    4907 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454820    4907 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454823    4907 flags.go:64] FLAG: --runonce="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454828    4907 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454832    4907 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454836    4907 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454841    4907 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454845    4907 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454850    4907 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454855    4907 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454860    4907 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454865    4907 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454869    4907 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454874    4907 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454879    4907 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454883    4907 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454887    4907 flags.go:64] FLAG: --system-cgroups=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454891    4907 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454899    4907 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454916    4907 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454920    4907 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454926    4907 flags.go:64] FLAG: --tls-min-version=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454930    4907 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454936    4907 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454940    4907 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454945    4907 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454949    4907 flags.go:64] FLAG: --v="2"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454956    4907 flags.go:64] FLAG: --version="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454963    4907 flags.go:64] FLAG: --vmodule=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454969    4907 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454973    4907 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455099    4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455106    4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455110    4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455115    4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455119    4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455123    4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455128    4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455132    4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455135    4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455139    4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455142    4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455147    4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455153    4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455157    4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455160    4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455164    4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455169    4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455174 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455178 4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455181 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455187 4907 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455191 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455195 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455198 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455202 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455206 4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455210 4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455213 4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455218 4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455221 4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455225 4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:05:45 crc 
kubenswrapper[4907]: W0127 18:05:45.455229 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455232 4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455237 4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455241 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455245 4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455249 4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455253 4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455258 4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455262 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455266 4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455270 4907 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455273 4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455278 4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455288 4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455292 4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455296 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455300 4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455304 4907 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455307 4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455311 4907 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455315 4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455320 4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455324 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455327 4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455332 4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455362 4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455367 4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455372 4907 feature_gate.go:330] 
unrecognized feature gate: ImageStreamImportMode Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455377 4907 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455382 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455386 4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455392 4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455396 4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455402 4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455407 4907 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455411 4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455415 4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455419 4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455423 4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455427 4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.455443 4907 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.466387 4907 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.466721 4907 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466814 4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466828 4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466833 4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466837 4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466843 4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466847 4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466851 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466854 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466858 4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466862 4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466865 
4907 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466868 4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466872 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466875 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466879 4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466883 4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466888 4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466896 4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466900 4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466904 4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466908 4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466912 4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466915 4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466919 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466922 4907 feature_gate.go:330] unrecognized 
feature gate: NewOLM Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466926 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466930 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466934 4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466938 4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466942 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466946 4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466951 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466956 4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466961 4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466967 4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466972 4907 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466976 4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466980 4907 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466983 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466987 4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466992 4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466997 4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467001 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467005 4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467009 4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467012 4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467016 4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467020 4907 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467024 4907 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467027 4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467031 4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467035 4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467038 4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467042 4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467045 4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467049 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467053 4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467056 4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467060 4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467064 4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467067 4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467071 4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467075 4907 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467079 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467083 4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467086 4907 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467090 4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467093 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467097 4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467100 4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467104 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.467113 4907 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467237 4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467246 4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 
18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467251 4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467255 4907 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467260 4907 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467264 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467268 4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467273 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467278 4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467283 4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467289 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467294 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467298 4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467302 4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467307 4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467311 4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467314 4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467319 4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467322 4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467326 4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467330 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467335 4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467340 4907 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467344 4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467383 4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467391 4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467395 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467399 4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467404 4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467407 4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467412 4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467416 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467421 4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467425 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467430 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467434 4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467438 4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467443 4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467446 4907 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467450 4907 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467454 4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467458 4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467462 4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467465 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467469 4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467473 4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467477 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:05:45 crc 
kubenswrapper[4907]: W0127 18:05:45.467481 4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467484 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467488 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467491 4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467495 4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467499 4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467502 4907 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467506 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467509 4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467516 4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467520 4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467524 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467528 4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467532 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 
18:05:45.467535 4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467539 4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467543 4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467547 4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467551 4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467568 4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467572 4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467576 4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467580 4907 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467585 4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.467592 4907 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.468851 4907 server.go:940] "Client rotation is on, will bootstrap in background" 
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.480435 4907 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.480661 4907 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.482837 4907 server.go:997] "Starting client certificate rotation" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.482902 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.484003 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 08:24:48.887067554 +0000 UTC Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.484161 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.513802 4907 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.519698 4907 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.520993 4907 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.540763 4907 log.go:25] "Validated CRI v1 runtime API" Jan 27 
18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.600781 4907 log.go:25] "Validated CRI v1 image API" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.603198 4907 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.614406 4907 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-18-01-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.614435 4907 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.630843 4907 manager.go:217] Machine: {Timestamp:2026-01-27 18:05:45.625573775 +0000 UTC m=+0.754856407 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0be71cc9-e3e6-47b6-b7c1-354451a0e2c5 BootID:a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f4:37:48 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f4:37:48 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2d:a0:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1e:ee:22 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:88:0a:33 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:83:8f:1a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:51:71:61:4a:99 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7e:81:e8:8e:52:02 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 
BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.631087 4907 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.631265 4907 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633533 4907 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633727 4907 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633771 4907 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633966 4907 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633976 4907 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634517 4907 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634535 4907 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634734 4907 state_mem.go:36] "Initialized new in-memory state store" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634847 4907 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641205 4907 kubelet.go:418] "Attempting to sync node with API server" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641230 4907 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641247 4907 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641263 4907 kubelet.go:324] "Adding apiserver pod source" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641274 4907 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.655422 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.655643 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.655731 4907 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.656362 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.656465 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.658899 4907 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.663269 4907 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665025 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665053 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665062 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665070 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665084 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665094 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665106 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665120 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665130 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665142 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665176 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665184 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.672382 4907 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.673274 4907 server.go:1280] "Started kubelet" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.673761 4907 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.674302 4907 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.674820 4907 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 18:05:45 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677325 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677459 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677502 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:58:15.511369193 +0000 UTC Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677475 4907 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.677732 4907 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677844 4907 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677858 4907 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 
18:05:45.677963 4907 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.678081 4907 server.go:460] "Adding debug handlers to kubelet server" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.679275 4907 factory.go:55] Registering systemd factory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.679299 4907 factory.go:221] Registration of the systemd container factory successfully Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.688737 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="200ms" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.688870 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.689018 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689320 4907 factory.go:153] Registering CRI-O factory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689475 4907 factory.go:221] Registration of the crio container factory successfully Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689685 4907 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689801 4907 factory.go:103] Registering Raw factory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689893 4907 manager.go:1196] Started watching for new ooms in manager Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.690763 4907 manager.go:319] Starting recovery of all containers Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.688929 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ea8a898b63c40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:05:45.67323552 +0000 UTC m=+0.802518152,LastTimestamp:2026-01-27 18:05:45.67323552 +0000 UTC m=+0.802518152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698858 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698935 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698947 4907 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698957 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698968 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698984 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698995 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699003 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699018 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699028 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699038 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699051 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699061 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699075 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699084 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699094 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699123 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699132 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699149 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699159 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699167 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699175 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699184 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699193 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699202 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699213 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699225 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699235 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699246 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699255 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699263 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699273 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699283 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699293 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699304 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699312 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699321 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699330 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699339 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699348 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699360 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699370 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699381 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699390 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699399 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699409 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699418 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699429 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699440 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699449 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699459 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699469 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699482 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699494 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699508 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699520 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699633 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699647 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699657 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699668 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699678 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699688 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699701 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699711 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699720 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699729 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699739 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699749 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699759 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699768 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699778 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699787 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699797 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699810 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699822 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699832 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699843 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699853 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699866 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699879 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699889 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699900 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699910 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699921 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699930 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699941 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699955 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699964 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699976 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699986 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699998 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700009 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700020 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700030 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700040 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700049 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700058 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700069 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700079 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700089 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700099 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700109 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700119 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700130 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700145 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700155 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700165 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700176 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700187 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700198 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700214 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700226 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700236 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700246 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700255 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700266 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700275 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700285 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700297 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700305 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700316 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700326 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700336 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700345 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700355 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700366 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718651 4907 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718749 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718775 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718790 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718810 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718829 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718850 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718867 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718880 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718899 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718912 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718931 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718947 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718965 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718988 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719005 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719020 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719039 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719055 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719073 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719093 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719112 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719150 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719164 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5"
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719244 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719259 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719280 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719307 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719321 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719352 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719368 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719384 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719525 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719540 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719609 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719625 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719644 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719667 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719683 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719730 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719749 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719765 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719792 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719810 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719828 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719853 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719872 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719895 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719914 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719932 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719955 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719972 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719997 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720017 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720032 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720051 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720067 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720085 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720103 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720120 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720180 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720204 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720226 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720244 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720263 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720284 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720302 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720327 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720342 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720358 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720379 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720397 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" 
seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720422 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720437 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720485 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720499 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720520 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720535 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720566 4907 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720584 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720597 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720616 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720725 4907 reconstruct.go:97] "Volume reconstruction finished" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720734 4907 reconciler.go:26] "Reconciler: start to sync state" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.726137 4907 manager.go:324] Recovery completed Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.736191 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.738865 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.738901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.738913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.740722 4907 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.740753 4907 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.740789 4907 state_mem.go:36] "Initialized new in-memory state store" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.741171 4907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.744800 4907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.746655 4907 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.746706 4907 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.746750 4907 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.748167 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.748278 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.778870 4907 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.783179 4907 policy_none.go:49] "None policy: Start" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.784096 4907 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.784124 4907 state_mem.go:35] "Initializing new in-memory state store" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.847283 4907 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850015 4907 manager.go:334] "Starting Device Plugin manager" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850074 4907 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850090 4907 server.go:79] "Starting device plugin registration server" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850723 4907 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850747 4907 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.851052 4907 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.851217 4907 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.851238 4907 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 18:05:45 crc 
kubenswrapper[4907]: E0127 18:05:45.858416 4907 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.889967 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="400ms" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.951688 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952777 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.953498 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.047959 4907 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 18:05:46 crc 
kubenswrapper[4907]: I0127 18:05:46.048254 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050495 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050729 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.051056 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.051108 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052386 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052545 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052611 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.053118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.053199 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.053234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054281 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054380 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054457 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.055011 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc 
kubenswrapper[4907]: I0127 18:05:46.055186 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.055275 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057156 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057216 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057241 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057530 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057680 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057741 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059333 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059455 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059482 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.060731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.060778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.060796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.129782 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.129864 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.129907 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130033 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130233 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130303 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130370 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130408 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130521 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130659 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130838 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130927 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130983 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.154077 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.155975 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.156033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.156057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.156102 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.157152 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232421 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232631 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232657 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232716 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232733 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232774 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232905 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232923 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232943 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc 
kubenswrapper[4907]: I0127 18:05:46.233005 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232967 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233115 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233114 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233207 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233510 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 
18:05:46.233538 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233596 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233621 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233641 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233681 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233746 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.291348 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="800ms" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.398102 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.410599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.435865 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.470152 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.482238 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949 WatchSource:0}: Error finding container 64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949: Status 404 returned error can't find the container with id 64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949 Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.483875 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.484437 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101 WatchSource:0}: Error finding container a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101: Status 404 returned error can't find the container with id a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101 Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.484929 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98 WatchSource:0}: Error finding container c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98: Status 404 returned error can't find the container with id c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98 Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.499287 4907 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36 WatchSource:0}: Error finding container 1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36: Status 404 returned error can't find the container with id 1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36 Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.502621 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3 WatchSource:0}: Error finding container 618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3: Status 404 returned error can't find the container with id 618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3 Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.558278 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.559930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.559982 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.560032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.560068 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.560501 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.678469 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:50:40.032565152 +0000 UTC Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.678975 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.753898 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.755131 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.755995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.757292 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.758268 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101"} Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.883702 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.883780 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.893692 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.893741 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.906686 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 
27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.906730 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.092211 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="1.6s" Jan 27 18:05:47 crc kubenswrapper[4907]: W0127 18:05:47.157380 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.157515 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.360959 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362396 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362476 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362524 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.363098 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.568967 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.570270 4907 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.678647 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:06:35.353799524 +0000 UTC Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.678857 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.761997 4907 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9" exitCode=0 Jan 27 
18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.762039 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.762109 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763230 4907 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0" exitCode=0 Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763342 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763715 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763926 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.764172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.764192 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.764200 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.766028 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728" exitCode=0 Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.766106 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.766202 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.767402 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.767427 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.767437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.768479 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.768746 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.769898 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.769932 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.769946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.770284 4907 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d" exitCode=0 Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.770298 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.770416 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.771376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.771423 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.771436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: W0127 18:05:48.675339 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.675962 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.678798 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.679431 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:55:26.977016952 +0000 UTC Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.693471 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="3.2s" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.774929 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.777187 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.779216 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.782588 4907 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7" exitCode=0 Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.782686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.782749 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.783604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.783656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.783670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.785519 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.785721 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.791612 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.791660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.791671 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: W0127 18:05:48.866165 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.866312 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.963529 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964726 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964765 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964805 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.965384 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:49 crc kubenswrapper[4907]: W0127 18:05:49.547410 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:49 crc kubenswrapper[4907]: E0127 18:05:49.547522 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.678671 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.679592 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:39:10.61299432 +0000 UTC Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.790604 4907 generic.go:334] 
"Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3" exitCode=0 Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.790690 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.790796 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.792744 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.792811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.792832 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.794437 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.794493 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.794497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2"} Jan 27 18:05:49 crc kubenswrapper[4907]: 
I0127 18:05:49.795819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.795850 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.795860 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798198 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798251 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798291 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798443 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.799414 4907 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.799488 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.799515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803899 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803925 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803899 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805045 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805058 
4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:50 crc kubenswrapper[4907]: W0127 18:05:50.298314 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:50 crc kubenswrapper[4907]: E0127 18:05:50.298420 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.454276 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.544471 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.679674 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:22:09.729665723 +0000 UTC Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.777659 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814076 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814133 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814147 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814156 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814103 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814170 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814210 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814161 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815280 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815313 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815403 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.680138 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:58:50.930046981 +0000 UTC Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.688391 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.701166 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.709445 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.821655 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.821622 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035"} Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.821967 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.822006 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.824599 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.827971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828308 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828945 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829181 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829194 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.166086 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.167759 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.167811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.167832 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.167872 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.429998 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.680608 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:27:20.120320942 +0000 UTC
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.824510 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.824511 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.824746 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826506 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826534 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826726 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:53 crc kubenswrapper[4907]: I0127 18:05:53.681026 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:39:31.550545982 +0000 UTC
Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.395015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.395330 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.397091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.397283 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.397476 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.681152 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:53:25.440928903 +0000 UTC
Jan 27 18:05:55 crc kubenswrapper[4907]: I0127 18:05:55.681544 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:33:58.870535369 +0000 UTC
Jan 27 18:05:55 crc kubenswrapper[4907]: E0127 18:05:55.858507 4907 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.266380 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.266649 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.268198 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.268245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.268263 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.277860 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.682089 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:54:42.103823268 +0000 UTC
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.835802 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.836793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.836859 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.836879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.244258 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.244539 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.248186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.248271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.248297 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.683174 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:37:11.955700687 +0000 UTC
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.669056 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.669725 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.671538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.671660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.671683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.678140 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.684215 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:14:42.883939648 +0000 UTC
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.840896 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.842590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.842646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.842656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:05:59 crc kubenswrapper[4907]: I0127 18:05:59.685270 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:31:59.751730374 +0000 UTC
Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.304510 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.304616 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.310150 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.310227 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.685741 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:56:36.990930832 +0000 UTC
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.669379 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.669498 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.686013 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:38:33.711033084 +0000 UTC
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.701742 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.701822 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.719757 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.719983 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.720522 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.720659 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.721343 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.721387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.721400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.727522 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.848679 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849178 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849276 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:02 crc kubenswrapper[4907]: I0127 18:06:02.568731 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 27 18:06:02 crc kubenswrapper[4907]: I0127 18:06:02.568802 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 27 18:06:02 crc kubenswrapper[4907]: I0127 18:06:02.686760 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:27:13.68824493 +0000 UTC
Jan 27 18:06:03 crc kubenswrapper[4907]: I0127 18:06:03.687636 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:16:10.614043394 +0000 UTC
Jan 27 18:06:04 crc kubenswrapper[4907]: I0127 18:06:04.688025 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:52:51.44747346 +0000 UTC
Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.284213 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.285101 4907 trace.go:236] Trace[1290662536]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:53.883) (total time: 11401ms):
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1290662536]: ---"Objects listed" error: 11401ms (18:06:05.284)
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1290662536]: [11.401862523s] [11.401862523s] END
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.285140 4907 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.306949 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.309026 4907 trace.go:236] Trace[1461441112]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:54.294) (total time: 11014ms):
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1461441112]: ---"Objects listed" error: 11014ms (18:06:05.308)
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1461441112]: [11.014146868s] [11.014146868s] END
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.309085 4907 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.310534 4907 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.310633 4907 trace.go:236] Trace[1335700862]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:54.272) (total time: 11038ms):
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1335700862]: ---"Objects listed" error: 11038ms (18:06:05.310)
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1335700862]: [11.038245768s] [11.038245768s] END
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.310672 4907 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.314087 4907 trace.go:236] Trace[752683230]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:52.724) (total time: 12586ms):
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[752683230]: ---"Objects listed" error: 12586ms (18:06:05.311)
Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[752683230]: [12.586590125s] [12.586590125s] END
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.314199 4907 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.319786 4907 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.652975 4907 apiserver.go:52] "Watching apiserver"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.655081 4907 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.655387 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.655924 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656103 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656206 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656123 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.656243 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.656443 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656722 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.657439 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.657538 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.658052 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.668177 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669095 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669113 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669447 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669642 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669759 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669809 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.679879 4907 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.680775 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.688728 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:52:39.906880258 +0000 UTC
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.704289 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.713371 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.713504 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.713655 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714766 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715287 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715887 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714545 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714706 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715240 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715838 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.716580 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.717192 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.716440 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.716656 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.717814 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.717940 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 18:06:05 crc
kubenswrapper[4907]: I0127 18:06:05.718048 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718110 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718217 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718285 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718293 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718431 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718464 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718491 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718510 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718570 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718593 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718614 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718635 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718653 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 
27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718672 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718689 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718709 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718725 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718745 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718764 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718782 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718801 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718822 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718839 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718860 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.718877 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718998 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719015 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718537 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718580 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718831 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719011 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719210 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719237 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719253 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719303 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719322 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719339 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719377 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719398 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719421 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719460 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719482 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719597 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719615 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719654 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719682 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719722 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719741 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719826 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719852 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719894 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719916 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.719937 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719978 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719998 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720017 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720057 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720078 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720103 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720152 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720175 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720220 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720248 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720309 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720375 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720396 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720415 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720455 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720474 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720492 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720530 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720575 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720598 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720624 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720673 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720700 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720749 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720768 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720790 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720828 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720847 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720927 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720946 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721009 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721033 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721074 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721098 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721139 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.721159 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721178 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721215 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721237 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721258 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721297 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721319 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721338 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721375 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721400 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721426 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:06:05 crc 
kubenswrapper[4907]: I0127 18:06:05.721467 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721489 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721526 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719258 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727827 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719486 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719694 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719651 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719788 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719881 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719992 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728230 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720096 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720106 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720137 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720148 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720205 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720510 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720596 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720600 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720965 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721283 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721353 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.721537 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.221514398 +0000 UTC m=+21.350797010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728610 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721631 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721894 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722204 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722876 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722932 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722947 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728684 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728749 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728795 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: 
I0127 18:06:05.728835 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728876 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728912 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728954 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728989 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729025 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") 
pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729063 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729102 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729141 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729181 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729243 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.729279 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729316 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729351 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729385 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730002 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730056 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") 
pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730134 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730208 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730247 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730284 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730321 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730383 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730422 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730456 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730498 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730590 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730629 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730704 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730819 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730930 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730975 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731015 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731053 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731092 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731131 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731222 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc 
kubenswrapper[4907]: I0127 18:06:05.731317 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731359 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731500 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731544 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731605 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731645 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731682 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731723 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731760 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731799 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731835 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731876 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731915 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731955 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732081 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732119 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732155 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732195 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732179 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732268 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732756 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732842 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732972 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733040 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733105 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733165 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733219 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733288 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733352 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733414 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733471 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733629 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733695 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733812 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734071 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734134 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734205 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734266 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734321 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734404 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 
18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734520 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734704 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734853 4907 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734889 4907 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734923 4907 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734956 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734987 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735023 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735055 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735085 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735114 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735146 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node 
\"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735175 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735206 4907 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735238 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735269 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735309 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735343 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735385 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722968 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722989 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.723393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.723383 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.723900 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724078 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724187 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724283 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724286 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724700 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725015 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725118 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725400 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725503 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725923 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725944 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726060 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726524 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726814 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726919 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726962 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727202 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727218 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727455 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727758 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727810 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729284 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729638 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729860 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730095 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730290 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730570 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730792 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730940 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731682 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732790 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733296 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733324 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735689 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735827 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735841 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735930 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737635 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737660 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737688 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737747 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738323 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738648 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738740 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738782 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738959 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739154 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739259 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739288 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739349 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739401 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739417 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739979 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.740527 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.740572 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.740872 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741161 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741220 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741242 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741254 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741729 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741842 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742342 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742606 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742636 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742692 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742762 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.743392 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.743581 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.743626 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.744518 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.744900 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745136 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745232 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745236 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.744548 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745874 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745865 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745953 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746326 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746417 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746442 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746512 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746719 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746796 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747023 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747150 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747347 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748396 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748484 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747594 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747612 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747752 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747842 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747730 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748366 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748354 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748381 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748465 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748607 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748606 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748732 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748759 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748768 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748748 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.749020 4907 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.749064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.749549 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.750835 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.751231 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.751573 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.751847 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752051 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752110 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752296 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752502 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.753029 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.753091 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.753117 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.753306 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.253285185 +0000 UTC m=+21.382567797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.753448 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.754045 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.254004517 +0000 UTC m=+21.383287199 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.754635 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.756007 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.756679 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.757262 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.757846 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.765007 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.765929 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.767388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.767864 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.768088 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.768330 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.768855 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.769020 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.769216 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772624 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772658 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772678 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772783 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.272733031 +0000 UTC m=+21.402015833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.773098 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.773264 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.777724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.777889 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.777915 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.777933 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.778002 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.277973189 +0000 UTC m=+21.407255801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.779196 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.788026 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.788889 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.790701 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.790919 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.791131 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.798458 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.809851 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.813150 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.814207 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.820862 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.830218 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.830951 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.831868 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.832972 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837142 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837212 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837629 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837646 4907 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837657 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837669 4907 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837680 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837689 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837699 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837708 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837718 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837727 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837735 4907 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837744 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837752 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837761 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837769 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837777 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837786 4907 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837794 4907 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837803 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837813 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837822 4907 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837832 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837841 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.837851 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837860 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837868 4907 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837876 4907 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837884 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837893 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837903 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837911 4907 reconciler_common.go:293] 
"Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837920 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837928 4907 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837939 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837948 4907 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837957 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837965 4907 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837974 4907 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837982 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837991 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838001 4907 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838009 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838019 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838027 4907 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838034 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838043 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838051 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838059 4907 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838069 4907 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838077 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838086 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838097 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838108 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838118 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838129 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838138 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838146 4907 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838157 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838166 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838179 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838190 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838200 4907 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838209 4907 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838217 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838225 4907 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838242 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838252 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838261 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838270 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838277 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838286 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838294 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838301 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838309 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838317 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838326 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838334 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838342 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838351 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838360 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838368 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838376 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838384 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838392 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838401 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838424 4907 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838434 4907 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838442 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838454 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838462 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838471 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838479 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838489 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838497 4907 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838506 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838515 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838523 4907 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838531 4907 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838539 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838546 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838571 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838581 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838589 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838596 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838604 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838612 4907 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838620 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838628 4907 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838637 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838644 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838652 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838660 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838669 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838677 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838685 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838692 4907 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838702 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838711 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838719 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838726 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838734 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838744 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838752 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838760 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838769 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838777 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838785 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838794 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838802 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838811 4907 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838820 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838829 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838837 4907 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838846 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838856 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838864 4907 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838874 4907 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838882 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838890 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838899 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838907 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838915 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838923 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838932 4907 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838940 4907 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838948 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838956 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838964 4907 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838973 4907 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838981 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838989 4907 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838997 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839005 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839013 4907 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839020 4907 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839028 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839036 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839044 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839052 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839065 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839073 4907 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839081 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839090 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839097 4907 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839105 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839112 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839121 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839129 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839137 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839145 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839153 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839160 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839168 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839950 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.840077 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.841281 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.847381 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.848409 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.851671 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.857249 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.857797 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.860259 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.860342 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.865691 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.867374 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.868480 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.869349 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.876624 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.876880 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.877466 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.877921 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.879475 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.880202 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.880769 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.883734 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.885348 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.886883 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.887744 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.889608 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.890468 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.892862 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.893678 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.894514 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.895104 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.895986 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 18:06:05 crc 
kubenswrapper[4907]: I0127 18:06:05.897352 4907 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.897521 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.900026 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.900730 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.901843 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.904090 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.904898 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.906111 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.906751 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.907986 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.908920 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.909700 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.910185 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.910839 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.912214 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.912919 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.914164 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.914912 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.916714 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.917301 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.918421 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.919096 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.920256 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.920453 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.921210 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.921786 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.935877 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939524 4907 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939548 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939572 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939600 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.945850 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.957512 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.971642 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.983837 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.990682 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: W0127 18:06:05.998443 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336 WatchSource:0}: Error finding container 718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336: Status 404 returned error can't find the container with id 718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336 Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.999490 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:06 crc kubenswrapper[4907]: W0127 18:06:06.005593 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14 WatchSource:0}: Error finding container 5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14: Status 404 returned error can't find the container with id 5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14 Jan 27 18:06:06 crc kubenswrapper[4907]: W0127 18:06:06.017358 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f WatchSource:0}: Error finding container b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f: Status 404 returned error can't find the container with id b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.241850 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.242041 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.242002774 +0000 UTC m=+22.371285386 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342844 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342864 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343000 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.342994 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343051 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343060 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343070 4907 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343116 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.34308767 +0000 UTC m=+22.472370292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343123 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343139 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.343129312 +0000 UTC m=+22.472411944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343015 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343154 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343160 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.343149092 +0000 UTC m=+22.472431714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343187 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.343168083 +0000 UTC m=+22.472450695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.688911 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:37:35.970235387 +0000 UTC Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.876694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.880320 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.880390 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.880408 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.883520 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.883670 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.887228 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.891625 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9" exitCode=255 Jan 
27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.891689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.906655 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.907258 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.907679 4907 scope.go:117] "RemoveContainer" containerID="51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 
18:06:06.920618 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.941839 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.962136 4907 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.980094 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.996356 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.016831 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.033790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.049487 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.068630 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.083645 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.096601 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.111416 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.254507 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.254767 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.254729204 +0000 UTC m=+24.384011816 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.277300 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.290662 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.291981 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.294283 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.307522 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.323259 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.343059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355632 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355688 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355726 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355756 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355861 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355904 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355919 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355929 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355993 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.355967615 +0000 UTC m=+24.485250237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356032 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.356022047 +0000 UTC m=+24.485304669 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355882 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356052 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356075 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356093 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356080 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.356072369 +0000 UTC m=+24.485354991 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356155 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.35613208 +0000 UTC m=+24.485414702 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.358666 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.372631 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.383752 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.401798 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.416840 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.430390 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.451790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.467538 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.493331 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.518968 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.536641 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.689170 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:39:23.088228689 +0000 UTC Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.747805 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.747862 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.747977 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.747967 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.748069 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.748245 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.896303 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.898873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01"} Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.898926 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.914685 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.929259 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.950139 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.971115 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.988190 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.004208 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.022175 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.036919 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.673288 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.677030 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.680489 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.689747 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:24:28.529424852 +0000 UTC Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.695027 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.708017 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.721967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.740296 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.755679 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.770087 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.784856 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.797706 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.818934 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.834249 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.846903 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.862121 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.875281 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.902654 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.904659 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586"} Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.922776 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.935967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.947176 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.961293 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.977282 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.993298 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.009987 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.026985 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.053064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.070415 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.088220 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.102095 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.274765 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.275030 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.274985931 +0000 UTC m=+28.404268573 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376490 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376635 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376770 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376857 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376775 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376884 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 
27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376946 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376793 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377040 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377007775 +0000 UTC m=+28.506290427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377087 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377172 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377197 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377106 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377055767 +0000 UTC m=+28.506338419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377337 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377299734 +0000 UTC m=+28.506582386 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377378 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377355726 +0000 UTC m=+28.506638378 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.690773 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:05:31.712301113 +0000 UTC Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.747595 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.747603 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.747775 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.747831 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.747850 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.748019 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.413424 4907 csr.go:261] certificate signing request csr-x65vj is approved, waiting to be issued Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.427469 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9plnb"] Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.427854 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.431484 4907 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.431540 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.432254 4907 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: secrets "node-ca-dockercfg-4777p" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.432280 4907 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.432286 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"node-ca-dockercfg-4777p\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.432302 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.432320 4907 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: configmaps "image-registry-certificates" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.432371 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-registry-certificates\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.438238 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-n4rxh"] Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.439802 4907 csr.go:257] certificate signing request csr-x65vj is issued Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 
18:06:10.439975 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.449388 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.449412 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.449588 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.484352 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/li
b/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.524790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.550045 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587337 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/195088d8-09aa-4943-8825-ddd4cb453056-serviceca\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587391 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997jz\" (UniqueName: \"kubernetes.io/projected/195088d8-09aa-4943-8825-ddd4cb453056-kube-api-access-997jz\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587414 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t69cj\" (UniqueName: \"kubernetes.io/projected/317dc29e-e919-4bac-894d-e54b69538c31-kube-api-access-t69cj\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587581 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/317dc29e-e919-4bac-894d-e54b69538c31-hosts-file\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587681 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/195088d8-09aa-4943-8825-ddd4cb453056-host\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.602398 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.619520 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.638438 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.653932 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.666734 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.677703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.686855 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688158 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-997jz\" (UniqueName: \"kubernetes.io/projected/195088d8-09aa-4943-8825-ddd4cb453056-kube-api-access-997jz\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688294 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t69cj\" (UniqueName: \"kubernetes.io/projected/317dc29e-e919-4bac-894d-e54b69538c31-kube-api-access-t69cj\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688423 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/195088d8-09aa-4943-8825-ddd4cb453056-serviceca\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/317dc29e-e919-4bac-894d-e54b69538c31-hosts-file\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688677 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/195088d8-09aa-4943-8825-ddd4cb453056-host\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688679 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/317dc29e-e919-4bac-894d-e54b69538c31-hosts-file\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688713 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/195088d8-09aa-4943-8825-ddd4cb453056-host\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " 
pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.691067 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:30:46.613308311 +0000 UTC Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.705259 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/man
ifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a35334
4a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646
fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.709047 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t69cj\" (UniqueName: \"kubernetes.io/projected/317dc29e-e919-4bac-894d-e54b69538c31-kube-api-access-t69cj\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.721525 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.736445 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.755177 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.755488 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.768202 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317dc29e_e919_4bac_894d_e54b69538c31.slice/crio-9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a WatchSource:0}: Error finding container 9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a: Status 404 returned error can't find the container with id 9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.781881 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.800882 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.817922 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.834021 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.848786 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.861483 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.882155 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.910498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n4rxh" event={"ID":"317dc29e-e919-4bac-894d-e54b69538c31","Type":"ContainerStarted","Data":"9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a"} Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.956313 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wgvjh"] Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.956861 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.959492 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960101 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960326 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960395 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960520 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.978205 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.988391 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.008314 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.026883 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.056735 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.092229 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093474 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-proxy-tls\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093543 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-rootfs\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093675 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n59rv\" (UniqueName: \"kubernetes.io/projected/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-kube-api-access-n59rv\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093778 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.122892 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.140732 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.159406 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.176864 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.193051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194285 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n59rv\" (UniqueName: \"kubernetes.io/projected/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-kube-api-access-n59rv\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194352 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-rootfs\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-proxy-tls\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194673 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-rootfs\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.195416 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.201170 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-proxy-tls\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.221250 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n59rv\" (UniqueName: 
\"kubernetes.io/projected/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-kube-api-access-n59rv\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.226603 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.267907 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.280228 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.289675 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/195088d8-09aa-4943-8825-ddd4cb453056-serviceca\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.362704 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jqfkt"] Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.363375 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fgtpz"] Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.363692 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.364055 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.365078 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.365953 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.366978 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367011 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367470 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367621 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367765 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367871 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.369050 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371381 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371416 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource 
"secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371426 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371390 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371447 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371450 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371487 4907 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371488 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371517 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371534 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371535 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource 
\"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371461 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371539 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371637 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.384759 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.397213 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.411059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.423314 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.435640 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.441585 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 18:01:10 +0000 UTC, rotation deadline is 2026-11-20 01:58:00.209791915 +0000 UTC Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.441633 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7111h51m48.768164158s for next certificate rotation Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.457315 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.473757 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.492944 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496819 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496876 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nl2m\" (UniqueName: \"kubernetes.io/projected/985b7738-a27c-4276-8160-c2baa64ab7f6-kube-api-access-6nl2m\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496916 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496961 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496985 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-os-release\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497008 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-daemon-config\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497045 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497092 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497114 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497133 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-bin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497203 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-multus-certs\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497299 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-cnibin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497327 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497346 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497377 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-cni-binary-copy\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-etc-kubernetes\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497506 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497532 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-os-release\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497569 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-socket-dir-parent\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497593 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-netns\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497619 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497638 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497658 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497681 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497707 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-system-cni-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497732 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497757 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497781 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-conf-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497808 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-cnibin\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497861 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497889 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " 
pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497966 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kncv7\" (UniqueName: \"kubernetes.io/projected/722204a2-dbb1-4b08-909b-09fdea49b7a0-kube-api-access-kncv7\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498028 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-k8s-cni-cncf-io\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498138 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-hostroot\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-system-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498215 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-multus\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498241 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-kubelet\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498268 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.499018 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.508175 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.532364 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.550146 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.568011 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.582456 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599562 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599624 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599620 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599667 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-os-release\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599677 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-daemon-config\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599716 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod 
\"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599744 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599774 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599832 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-bin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-cnibin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc 
kubenswrapper[4907]: I0127 18:06:11.599880 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-multus-certs\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599925 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-cni-binary-copy\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599946 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-etc-kubernetes\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599956 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-os-release\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599968 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599980 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-bin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599988 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600007 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-os-release\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600025 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-socket-dir-parent\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600045 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-netns\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600071 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600090 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600131 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600152 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod 
\"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-system-cni-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600210 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600230 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-conf-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600251 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: 
\"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600271 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kncv7\" (UniqueName: \"kubernetes.io/projected/722204a2-dbb1-4b08-909b-09fdea49b7a0-kube-api-access-kncv7\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600314 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-cnibin\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600336 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600365 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-k8s-cni-cncf-io\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600387 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600411 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600432 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600434 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-system-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600527 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-daemon-config\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600546 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-system-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600596 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600621 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600623 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-multus\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " 
pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600635 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-cnibin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600658 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600667 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-system-cni-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-kubelet\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600702 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-hostroot\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-kubelet\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600724 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600749 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nl2m\" (UniqueName: \"kubernetes.io/projected/985b7738-a27c-4276-8160-c2baa64ab7f6-kube-api-access-6nl2m\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600753 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-multus\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600882 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600917 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600989 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-conf-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601060 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-socket-dir-parent\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-netns\") pod \"multus-fgtpz\" (UID: 
\"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601071 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-cni-binary-copy\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601092 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-etc-kubernetes\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601122 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601132 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600681 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-multus-certs\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601180 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-os-release\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.602645 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601315 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.603342 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.603775 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.604024 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.604271 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.604322 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-hostroot\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-cnibin\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601309 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-k8s-cni-cncf-io\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.627749 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nl2m\" (UniqueName: \"kubernetes.io/projected/985b7738-a27c-4276-8160-c2baa64ab7f6-kube-api-access-6nl2m\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.629988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kncv7\" (UniqueName: \"kubernetes.io/projected/722204a2-dbb1-4b08-909b-09fdea49b7a0-kube-api-access-kncv7\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: 
\"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.651700 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.671451 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.678694 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.685037 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.685093 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686743 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686730 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.691770 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:00:26.530111799 +0000 UTC Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.698202 4907 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.698543 4907 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 
18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700477 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700508 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700519 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.710505 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.721014 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.725095 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726543 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726576 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.741629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.741910 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745411 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745422 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.746897 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.746992 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.747234 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.747303 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.748194 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.748359 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.756723 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.760927 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435
cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765816 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765856 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765886 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.776650 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.777034 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779995 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.780007 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.788985 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.799657 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.800210 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803297 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 
18:06:11.803819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803830 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803858 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.820136 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.833510 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.844772 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.846143 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.854795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-997jz\" (UniqueName: \"kubernetes.io/projected/195088d8-09aa-4943-8825-ddd4cb453056-kube-api-access-997jz\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " 
pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.857217 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.869933 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906572 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906587 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.915998 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n4rxh" event={"ID":"317dc29e-e919-4bac-894d-e54b69538c31","Type":"ContainerStarted","Data":"c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.917697 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.917745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"eaf287f038c6113d87a2fe2ea86f1dd42eb5276b3a2451ac4f13444e9acd40ce"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.920504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.920534 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.920544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"fce9339f716c71b2355a7ba713d746483ccf60e21cfd2fff6b4b274849362374"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.924326 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerStarted","Data":"74cf4023a97668c3ea831b4d657244a86add4e21c5a78c8e7879854228b82275"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.932572 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.951897 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.968980 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.982384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.997439 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009545 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009588 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.013479 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.018152 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-op
erator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.021056 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.037314 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: W0127 18:06:12.039273 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod195088d8_09aa_4943_8825_ddd4cb453056.slice/crio-eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9 WatchSource:0}: Error finding container eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9: Status 404 returned error can't find the container with id eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9 Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.055757 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.072572 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.093506 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112809 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112828 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112840 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.120660 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.165221 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.210275 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.211294 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.212006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215101 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215153 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.230155 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.269687 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.277614 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.299217 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317153 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317180 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317194 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.339146 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.381982 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.418818 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420128 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420176 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.449941 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.451571 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.481757 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.489446 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 
18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.495493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523853 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523870 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523889 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523904 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.544414 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.595417 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2
b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: E0127 18:06:12.601577 4907 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Jan 27 18:06:12 crc kubenswrapper[4907]: E0127 18:06:12.602244 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib podName:a62f5e7d-70be-4705-a4b0-d5e4f531cfde nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.102212075 +0000 UTC m=+28.231494707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib") pod "ovnkube-node-qj9w2" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde") : failed to sync configmap cache: timed out waiting for the condition Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.621054 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 
18:06:12.626811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626855 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.650149 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.679589 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.692296 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:46:44.953782435 +0000 UTC Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.718375 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729516 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.759886 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.800780 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832060 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832122 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832179 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.839707 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.880777 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.921586 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.929720 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0" exitCode=0 Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.929828 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.930378 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.931262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9plnb" event={"ID":"195088d8-09aa-4943-8825-ddd4cb453056","Type":"ContainerStarted","Data":"d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.931322 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9plnb" event={"ID":"195088d8-09aa-4943-8825-ddd4cb453056","Type":"ContainerStarted","Data":"eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.933973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934157 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.980256 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.022777 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc 
kubenswrapper[4907]: I0127 18:06:13.037652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037666 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.065480 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.100431 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.116643 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.117619 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140281 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc 
kubenswrapper[4907]: I0127 18:06:13.140533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140586 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140601 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.141062 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.179346 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.192481 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:13 crc kubenswrapper[4907]: W0127 18:06:13.204468 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62f5e7d_70be_4705_a4b0_d5e4f531cfde.slice/crio-a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946 WatchSource:0}: Error finding container a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946: Status 404 returned error can't find the container with id a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946 Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.221902 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242833 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242842 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242856 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242865 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.258764 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.301916 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.318344 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.318514 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.318488062 +0000 UTC m=+36.447770684 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.337322 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345239 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345290 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345315 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.378767 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:
06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419893 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419983 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420075 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420142 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420164 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420210 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420224 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420153 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.420133405 +0000 UTC m=+36.549416017 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420269 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.420245108 +0000 UTC m=+36.549527730 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420081 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420326 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420339 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420327 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.42030254 +0000 UTC m=+36.549585162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420365 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.420357942 +0000 UTC m=+36.549640554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.428010 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447799 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.464764 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.500774 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.540703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550393 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.576964 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.618277 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652347 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652360 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652369 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.692596 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:01:10.006072975 +0000 UTC Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.747269 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.747289 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.747421 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.747532 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.747406 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.747749 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754385 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754468 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857692 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857721 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.937261 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c" exitCode=0 Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.937420 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.939469 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71" exitCode=0 Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.939498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.939522 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.960654 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961232 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.973243 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.016736 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.038310 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.054818 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064786 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064819 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.074324 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.089950 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.110632 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.121882 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.135499 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.153590 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.166278 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181831 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181873 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181885 4907 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181919 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.183287 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.196672 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.209012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.219468 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.258722 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.284709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.284997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.285074 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.285190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.285276 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.299657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.344645 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.381075 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.387921 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.387981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.387995 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc 
kubenswrapper[4907]: I0127 18:06:14.388023 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.388039 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.417039 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.460411 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489880 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489890 4907 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489917 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.504358 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.542321 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.576789 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591743 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591782 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591812 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591837 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.617785 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.657125 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.693158 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:29:21.02005567 +0000 UTC Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694586 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694615 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694648 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.701633 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.745128 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.780520 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.796781 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797031 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797057 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.818209 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900026 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900085 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900139 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947470 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947548 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947579 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947591 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947602 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947611 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b"} Jan 27 18:06:14 crc kubenswrapper[4907]: 
I0127 18:06:14.950348 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd" exitCode=0 Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.950376 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.966955 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.984648 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.000150 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 
18:06:15.001955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.001992 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.002003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.002018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.002029 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.019629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.031592 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.059191 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.098100 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.104948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.104985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.104997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.105020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.105038 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.141629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.178336 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206792 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206806 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206816 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.221334 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.256914 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.298382 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc 
kubenswrapper[4907]: I0127 18:06:15.309720 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309765 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.349841 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.377433 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc 
kubenswrapper[4907]: I0127 18:06:15.412530 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412545 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412568 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.417568 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.482225 4907 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514527 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514652 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617194 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617229 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.693911 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:56:30.295421154 +0000 UTC Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720817 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720898 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720984 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.747642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.747726 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:15 crc kubenswrapper[4907]: E0127 18:06:15.747886 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.747940 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:15 crc kubenswrapper[4907]: E0127 18:06:15.748045 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:15 crc kubenswrapper[4907]: E0127 18:06:15.748205 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.773112 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.790609 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.809335 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823745 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823797 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.825738 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.843595 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.878870 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.910989 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.925998 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926133 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.930424 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 
18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.944068 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.955208 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e" exitCode=0 Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.955251 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.956820 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.973496 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 
18:06:15.989033 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.001795 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.014027 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.026207 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029045 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029054 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029082 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.058136 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058df
f90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.102669 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131536 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131612 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131680 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.137726 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.178248 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234826 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234878 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234892 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234928 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.235657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.265359 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.302303 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338417 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.343889 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-2
7T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.381786 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.419647 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440535 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.459696 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.501756 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546305 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.548234 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.580489 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.620683 4907 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649318 4907 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649351 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.716423 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:08:19.273087622 +0000 UTC Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752411 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752490 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752502 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855360 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855381 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.957657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.957989 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.958001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.958021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.958033 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.963085 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.966372 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b" exitCode=0 Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.966409 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.985341 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.003424 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.021281 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.036814 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.060544 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062580 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.075703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.087678 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.101291 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.110030 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.123316 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4
d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.144071 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.156371 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167027 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167073 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167085 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167118 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.168950 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.180976 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.220284 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271907 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271940 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271980 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380474 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483482 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586883 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586966 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.587017 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692072 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692177 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692254 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.718458 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:59:22.571155137 +0000 UTC Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.750071 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:17 crc kubenswrapper[4907]: E0127 18:06:17.750194 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.750574 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:17 crc kubenswrapper[4907]: E0127 18:06:17.750636 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.750677 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:17 crc kubenswrapper[4907]: E0127 18:06:17.750723 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795449 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795509 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795527 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795603 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795651 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907656 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.974146 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3" exitCode=0 Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.974194 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.987488 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.009198 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.010937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.010982 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.010996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.011016 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.011041 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.026664 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.042002 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.055031 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.068107 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.087896 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.102442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.113808 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114333 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114345 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114363 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114376 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.128465 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.141084 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.159849 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.181975 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.195804 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.206965 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216581 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216638 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216671 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319065 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319129 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319160 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422561 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422645 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525448 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525490 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628938 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628950 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628979 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.719456 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:34:12.386011194 +0000 UTC Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732137 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732258 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732276 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835563 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835871 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835909 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835935 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939501 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.984781 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerStarted","Data":"018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.000221 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.016682 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.030624 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043184 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.057290 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.084791 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.106743 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.128175 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.143423 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145581 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145651 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145694 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.165875 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.182385 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.200070 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.213489 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z"
Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.226821 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z"
Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.241619 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248733 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.255194 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351795 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 
18:06:19.351835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351849 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351875 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454483 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454605 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454656 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557774 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661209 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661281 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661330 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661348 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.719721 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:19:38.255929884 +0000 UTC Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.747739 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.747801 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.747753 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:19 crc kubenswrapper[4907]: E0127 18:06:19.747940 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:19 crc kubenswrapper[4907]: E0127 18:06:19.748038 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:19 crc kubenswrapper[4907]: E0127 18:06:19.748245 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765517 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765574 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867912 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867963 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971611 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971650 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.992358 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.018010 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\
\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.035973 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.050216 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.064323 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074658 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074721 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074751 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.079762 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.095874 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.112720 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.127010 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.142926 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.158316 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.173050 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc 
kubenswrapper[4907]: I0127 18:06:20.177458 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177470 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.189887 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.205122 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.219524 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.239285 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280129 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280163 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382825 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382878 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382927 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486514 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486741 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.589766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590199 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590358 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.693952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694900 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.720652 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:45:10.103945187 +0000 UTC Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.797954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798168 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901320 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.996827 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.996877 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.996891 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003776 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.024006 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.024088 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.047455 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.060682 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.070609 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.084309 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.096786 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107025 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107153 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.114931 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.143230 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.157107 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.169754 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.185209 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.203038 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209571 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209785 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.219330 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.235703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.249563 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.267813 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.292500 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312809 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.314446 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.327431 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.337773 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.363917 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.387683 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.404942 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.405386 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.405629 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.405604206 +0000 UTC m=+52.534886828 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416061 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416130 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416142 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.419513 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.432978 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.447270 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.464246 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.480037 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.493347 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.506934 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.506985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.507017 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.507057 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507126 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507143 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507154 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507158 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507209 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.507187428 +0000 UTC m=+52.636470030 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507170 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507266 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.50723924 +0000 UTC m=+52.636521852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507281 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507300 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:06:37.507287421 +0000 UTC m=+52.636570093 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507322 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507343 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507429 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.507401714 +0000 UTC m=+52.636684386 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.508437 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518188 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518200 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518236 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.526302 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621210 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621278 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.705675 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.721886 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:56:48.160403388 +0000 UTC Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723524 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723559 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723574 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.747794 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.747894 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.747794 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.747987 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.748092 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.748248 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.764548 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.813632 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814557 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.831746 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.837319 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842783 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842870 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842921 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.858630 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:
06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.863713 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867679 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867763 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.875070 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.881786 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886156 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.894698 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.901450 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905248 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905298 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905316 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905330 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.917731 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.919721 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.919832 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921935 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921966 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.929025 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.940383 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.952162 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.966036 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.978012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.991381 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.002680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.015059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024732 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024764 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024793 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127201 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127242 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127261 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230230 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230241 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333096 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333155 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333186 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436171 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539492 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539554 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643187 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643229 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643247 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.722892 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:34:38.078026048 +0000 UTC Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746265 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849551 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849694 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953696 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953747 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.007535 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/0.log" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.012336 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c" exitCode=1 Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.012391 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.021006 4907 scope.go:117] "RemoveContainer" containerID="d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.039748 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.056916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.056971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.056984 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.057005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.057017 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.059211 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.079510 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.112325 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.133313 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.150683 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.159954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160434 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.168391 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.185409 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.202674 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.223967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.242321 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.260440 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc 
kubenswrapper[4907]: I0127 18:06:23.264641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264679 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264700 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.289078 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18
:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.307734 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.321868 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367403 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367776 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367970 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.368057 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471651 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471717 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471743 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471764 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574476 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574501 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574514 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677775 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.723217 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:30:01.816931376 +0000 UTC Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.748073 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.748142 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:23 crc kubenswrapper[4907]: E0127 18:06:23.748227 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.748224 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:23 crc kubenswrapper[4907]: E0127 18:06:23.748370 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:23 crc kubenswrapper[4907]: E0127 18:06:23.748722 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.781891 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.781963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.781985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.782018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.782038 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.787972 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb"] Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.788846 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.793274 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.793376 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.811293 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.823634 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.838383 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.859004 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.881011 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885173 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885198 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885212 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.895938 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.912287 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.926608 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935771 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z99h\" (UniqueName: \"kubernetes.io/projected/6fe1d896-28da-48d2-9a3e-e4154091a601-kube-api-access-7z99h\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6fe1d896-28da-48d2-9a3e-e4154091a601-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935874 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" 
(UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935924 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.947976 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":
0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.969141 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988436 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.991140 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.008183 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.027652 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.032293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/0.log" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036470 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z99h\" (UniqueName: 
\"kubernetes.io/projected/6fe1d896-28da-48d2-9a3e-e4154091a601-kube-api-access-7z99h\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036550 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6fe1d896-28da-48d2-9a3e-e4154091a601-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036665 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036926 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.037605 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.037886 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.038232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.046448 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.046603 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6fe1d896-28da-48d2-9a3e-e4154091a601-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.063487 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.068101 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z99h\" (UniqueName: \"kubernetes.io/projected/6fe1d896-28da-48d2-9a3e-e4154091a601-kube-api-access-7z99h\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.079145 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092852 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc 
kubenswrapper[4907]: I0127 18:06:24.092884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092900 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.093650 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.106107 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.109375 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-
rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: W0127 18:06:24.120798 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe1d896_28da_48d2_9a3e_e4154091a601.slice/crio-154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383 WatchSource:0}: Error finding container 154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383: Status 404 returned error can't find the container with id 154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383 Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.129051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.142907 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.161273 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.179596 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201842 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.212196 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.226498 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.249023 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.304277 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309265 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309376 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.320832 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.336741 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.351879 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.368796 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.383790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.397276 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412667 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412752 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412767 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.514922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.514975 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.514987 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.515007 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.515020 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617477 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617489 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720428 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.723772 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:08:26.91955299 +0000 UTC Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823369 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823421 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823503 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926868 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926881 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029330 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029428 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029444 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.042603 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.043279 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/0.log" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.045906 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" exitCode=1 Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.046005 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.046083 4907 scope.go:117] "RemoveContainer" containerID="d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.046984 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.047245 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.049538 4907 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" event={"ID":"6fe1d896-28da-48d2-9a3e-e4154091a601","Type":"ContainerStarted","Data":"a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.049617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" event={"ID":"6fe1d896-28da-48d2-9a3e-e4154091a601","Type":"ContainerStarted","Data":"82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.049636 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" event={"ID":"6fe1d896-28da-48d2-9a3e-e4154091a601","Type":"ContainerStarted","Data":"154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.064240 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.080384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.100917 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] 
Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for 
net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.119051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131851 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131896 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 
18:06:25.131918 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.135501 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.150611 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.164921 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.179941 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa
41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.201102 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.217625 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.231654 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234969 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234991 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.235007 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.248173 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.262262 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.280075 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n2z5k"] Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.280491 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 
18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.280817 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.280909 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.295209 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.310181 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.324748 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338487 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.339360 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.352260 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.357631 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.357715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xxqh\" (UniqueName: 
\"kubernetes.io/projected/eeaae2ee-c57b-4323-9d3c-563d87d85f08-kube-api-access-9xxqh\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.368743 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06
:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b
2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\
\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.379496 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 
27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.398995 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\
\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.412841 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.423409 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.436374 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442424 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442534 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442583 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442603 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.453283 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139e
e8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.458473 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.458574 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xxqh\" (UniqueName: \"kubernetes.io/projected/eeaae2ee-c57b-4323-9d3c-563d87d85f08-kube-api-access-9xxqh\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.458751 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.458856 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:25.95883061 +0000 UTC m=+41.088113412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.466021 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.479169 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.485794 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xxqh\" (UniqueName: \"kubernetes.io/projected/eeaae2ee-c57b-4323-9d3c-563d87d85f08-kube-api-access-9xxqh\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.489880 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.500004 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.513197 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.533466 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring 
zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bi
n\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545275 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545337 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545372 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.548467 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.690861 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.690997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.691014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.691046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.691065 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.724788 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:42:16.968043399 +0000 UTC Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.747743 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.747855 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.747940 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.747961 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.748010 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.748126 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.763656 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc 
kubenswrapper[4907]: I0127 18:06:25.785230 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793752 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793762 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.803476 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.818601 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.832460 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.850650 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.877324 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.896631 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897647 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.915901 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.929675 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.944987 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.958824 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.968711 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.980324 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.991022 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.994371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.994583 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.994651 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:26.994635219 +0000 UTC m=+42.123917831 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.001840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002030 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.008718 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.028540 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring 
zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bi
n\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.055850 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.060757 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:26 crc kubenswrapper[4907]: E0127 18:06:26.061062 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.080824 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.095126 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106461 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106682 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106709 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.119749 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.132860 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.148333 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-pl
ugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.160731 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc 
kubenswrapper[4907]: I0127 18:06:26.180032 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.194885 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.208117 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.211838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212316 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.221064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.231349 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.248950 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.268063 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.293674 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.314896 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.314980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.314993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.315014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.315028 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.322493 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.343482 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.417896 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.417953 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.417968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.418007 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.418021 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.520890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.520958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.520980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.521009 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.521029 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624064 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624162 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624175 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.725444 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:47:23.061337105 +0000 UTC Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.727681 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.727836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.727922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.728018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.728109 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.747127 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:26 crc kubenswrapper[4907]: E0127 18:06:26.747288 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831642 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831687 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933798 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.007543 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.007739 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.007841 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:29.007815714 +0000 UTC m=+44.137098326 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041095 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041286 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041307 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.144933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145044 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145080 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248478 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351637 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351696 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351739 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454755 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454776 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454789 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558230 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558262 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661277 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661394 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661491 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.726208 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:03:10.379438039 +0000 UTC
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.748102 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.748178 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.748267 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.748390 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.748510 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.748637 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764161 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764173 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867421 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867634 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970122 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970177 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073240 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073252 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073274 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073287 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176845 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279877 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279886 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279900 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279911 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.382915 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.382981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.382999 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.383022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.383040 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589231 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692176 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692223 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692269 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.726633 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:18:09.046961465 +0000 UTC
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.747642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:06:28 crc kubenswrapper[4907]: E0127 18:06:28.748377 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.794948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795040 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795053 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.897955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898160 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.001002 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.030713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.030965 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.031047 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:33.031020287 +0000 UTC m=+48.160302949 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103966 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207168 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207712 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207819 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.310862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.311338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.311553 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.311804 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.312023 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.416655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417082 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417511 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521166 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521210 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624231 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624286 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624349 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.726989 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:21:26.415429877 +0000 UTC
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727247 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727286 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.748010 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.748093 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.748199 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.748284 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.748472 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.748685 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830509 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830618 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830636 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934496 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038208 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038243 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038273 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142095 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142158 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245682 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245699 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349292 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349323 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349334 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452424 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452528 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555543 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555674 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658512 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658553 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.727649 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:56:43.652500938 +0000 UTC Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.747075 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:30 crc kubenswrapper[4907]: E0127 18:06:30.747257 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762678 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.865430 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.865994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.866190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.866375 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.866673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969692 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969767 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969834 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.072872 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073682 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176137 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176612 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176810 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280791 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280876 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.383379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.383782 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.383904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.384038 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.384128 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487697 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487748 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.590968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591535 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694161 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694294 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694315 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.727953 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:38:34.48266556 +0000 UTC Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.747526 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.747628 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.747971 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:31 crc kubenswrapper[4907]: E0127 18:06:31.748129 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:31 crc kubenswrapper[4907]: E0127 18:06:31.748280 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:31 crc kubenswrapper[4907]: E0127 18:06:31.748503 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797147 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900700 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900752 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003194 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003234 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.105794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106425 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210907 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210947 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226661 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226774 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.246361 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252218 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252292 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.265578 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269497 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.282455 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287312 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287362 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287425 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.301838 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308940 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.322727 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.323146 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324699 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426772 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426834 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426868 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530695 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530752 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530771 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634601 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634613 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634643 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.728904 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:07:25.041195135 +0000 UTC Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736875 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736988 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.737001 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.747159 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.747344 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.839955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.839998 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.840009 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.840025 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.840040 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.943722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944754 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048746 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048829 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.074156 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.074390 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.074485 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:41.074458416 +0000 UTC m=+56.203741068 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152458 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152541 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152647 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256206 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256307 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359084 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359118 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462193 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565397 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668668 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.729871 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:52:48.501216569 +0000 UTC Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.747301 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.747332 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.747600 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.747790 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.747914 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.748006 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.772963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773050 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773092 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876173 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876224 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876252 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876265 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978298 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978309 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.080972 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081096 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183453 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183468 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.285970 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.286006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.286015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.287733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.287783 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390461 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390506 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390633 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493860 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493880 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493900 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493915 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597469 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701294 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.702050 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.730536 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:11:08.226406603 +0000 UTC Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.748062 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:34 crc kubenswrapper[4907]: E0127 18:06:34.748238 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806203 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806268 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806282 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909783 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909816 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909847 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909869 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013326 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013448 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013494 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116297 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116352 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219226 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322760 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322807 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425809 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425919 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425980 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529632 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529777 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529869 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632702 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632791 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632804 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632858 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.730915 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:47:54.560604386 +0000 UTC Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736174 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736328 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736352 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.747127 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.747193 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.747219 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:35 crc kubenswrapper[4907]: E0127 18:06:35.747302 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:35 crc kubenswrapper[4907]: E0127 18:06:35.747467 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:35 crc kubenswrapper[4907]: E0127 18:06:35.747713 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.771197 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.788059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.808657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.827939 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839999 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.840013 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.850294 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.864426 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.880908 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.894504 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.916898 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.934430 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc 
kubenswrapper[4907]: I0127 18:06:35.942718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.942918 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.943032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.943133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.943232 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.955600 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.970438 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.983201 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.997680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.011144 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.022899 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.037708 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc 
kubenswrapper[4907]: I0127 18:06:36.046797 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046810 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149316 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149349 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.252946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253023 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253042 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253103 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355807 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355889 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355926 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459241 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459349 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459383 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562410 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562423 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562452 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.665954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666026 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666064 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.731761 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:13:15.197237451 +0000 UTC Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.747165 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:36 crc kubenswrapper[4907]: E0127 18:06:36.747340 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.768992 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769074 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769130 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769148 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872661 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976279 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976323 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976366 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079068 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079124 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182464 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182591 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285385 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285404 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285418 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388420 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388471 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.419374 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.419795 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:07:09.419587357 +0000 UTC m=+84.548869979 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492185 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492204 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525341 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525428 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.525912 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.525965 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.525994 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526062 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526108 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526070 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526210 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526275 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:37 crc 
kubenswrapper[4907]: E0127 18:06:37.526305 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.526104257 +0000 UTC m=+84.655386869 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526418 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.526386276 +0000 UTC m=+84.655668918 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526462 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.526434997 +0000 UTC m=+84.655717639 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526551 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.5265268 +0000 UTC m=+84.655809452 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596151 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596173 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596224 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699923 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699970 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.732708 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:20:41.239384591 +0000 UTC Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.747337 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.747401 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.747336 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.747687 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.747864 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.748042 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802968 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907668 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011700 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011752 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114428 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114439 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217420 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217473 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320598 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.422920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423042 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423055 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526524 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526651 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630226 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630327 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630343 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733012 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733127 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733247 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:29:33.857948139 +0000 UTC Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.747821 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:38 crc kubenswrapper[4907]: E0127 18:06:38.747988 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835439 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835497 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835527 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938543 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938671 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938696 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938718 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041081 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041120 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143817 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143834 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246181 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246290 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246378 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349769 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349844 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349873 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453262 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556224 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556328 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556348 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659887 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659956 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659975 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659989 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.733875 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:02:45.914038029 +0000 UTC Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.747454 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.747710 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:39 crc kubenswrapper[4907]: E0127 18:06:39.747768 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.747481 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:39 crc kubenswrapper[4907]: E0127 18:06:39.748001 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:39 crc kubenswrapper[4907]: E0127 18:06:39.748218 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.749610 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764529 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764682 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764701 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764726 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764782 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868754 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971732 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971750 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971781 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075522 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075658 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075675 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.111588 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.114453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.115828 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.145931 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f
8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.164591 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178056 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178100 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.181763 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.202283 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.222805 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.240932 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc 
kubenswrapper[4907]: I0127 18:06:40.255760 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699ee
e8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.268545 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279030 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279594 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.292497 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.304481 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.319109 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.330215 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.343618 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.358237 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.371629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382170 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.391608 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for 
net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.459118 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.469016 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.475316 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484598 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484636 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484689 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.489022 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.499677 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.510677 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.525421 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.541666 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.554818 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.567547 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.579207 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587413 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587503 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.596712 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.622459 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for 
net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.644380 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.656741 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.666041 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.674381 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T
18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.686906 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.690971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.690999 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.691010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.691025 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.691036 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.698132 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc 
kubenswrapper[4907]: I0127 18:06:40.734552 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:05:04.33079498 +0000 UTC Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.747098 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:40 crc kubenswrapper[4907]: E0127 18:06:40.747224 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793414 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793455 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793492 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.896890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.896977 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.897001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.897034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.897061 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000721 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103827 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103998 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.119535 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.120652 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.123847 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" exitCode=1 Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.123945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.124006 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.124897 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.125129 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.153304 4907 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058
304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.163764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.163988 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.164063 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:57.164043975 +0000 UTC m=+72.293326587 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.170668 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.186422 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.201945 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207570 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207618 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207629 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.214397 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.229320 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.245133 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.261200 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.278752 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.293667 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.308123 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.309660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.309802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.309890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.310006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.310097 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.331118 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 
obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/
env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc 
kubenswrapper[4907]: I0127 18:06:41.354353 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.370883 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.384502 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.396651 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412290 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412620 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412659 4907 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.440229 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc 
kubenswrapper[4907]: I0127 18:06:41.515221 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515274 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515312 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515328 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618144 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618208 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618229 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618243 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721762 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.722024 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.735679 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:17:20.097687167 +0000 UTC Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.747211 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.747454 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.747911 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.748110 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.748283 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.749212 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825329 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825341 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825360 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825668 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928989 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.031919 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.031979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.031997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.032020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.032039 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.131605 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134395 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134420 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134439 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.136435 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.136703 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.148486 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.163511 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.188019 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.201574 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.215851 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.227550 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z"
Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237258 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237296 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237326 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237338 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237368 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.253253 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.263716 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc 
kubenswrapper[4907]: I0127 18:06:42.284914 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.299253 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.313647 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.327171 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.339397 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340617 4907 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340651 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.355278 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.369680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.380368 4907 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.391200 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443305 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443436 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545902 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545914 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545935 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545949 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.561967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562488 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562619 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.583579 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587449 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587669 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587741 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587862 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.601079 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605634 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605881 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.625290 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.628952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.628985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.628996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.629012 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.629022 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.647054 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651013 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651077 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651101 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.664873 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.665132 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666889 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666970 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.737161 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:55:30.489115348 +0000 UTC Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.747698 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.747891 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769529 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769668 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769719 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872361 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872401 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872416 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974760 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974815 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974827 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974864 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077392 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077457 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077493 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180392 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180416 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180444 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180467 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284744 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284777 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284796 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388667 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388687 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.491958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492055 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492108 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595503 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595522 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699669 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699689 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.737826 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:24:47.932440394 +0000 UTC Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.747427 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.747506 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.747549 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:43 crc kubenswrapper[4907]: E0127 18:06:43.747709 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:43 crc kubenswrapper[4907]: E0127 18:06:43.747821 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:43 crc kubenswrapper[4907]: E0127 18:06:43.747999 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803233 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803252 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905250 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905295 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905328 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007830 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007945 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110931 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213585 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213616 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317210 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317365 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317382 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420858 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420872 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523717 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523771 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523786 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523819 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627902 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627929 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627964 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627992 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731048 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731145 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.739507 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:28:41.559494092 +0000 UTC Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.747906 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:44 crc kubenswrapper[4907]: E0127 18:06:44.748117 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.834925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835048 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835067 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938136 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048013 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048159 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151856 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255853 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255960 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255999 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360211 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463819 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566997 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670487 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670660 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.739774 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:36:13.549761736 +0000 UTC
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.747349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.747372 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:06:45 crc kubenswrapper[4907]: E0127 18:06:45.747489 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:06:45 crc kubenswrapper[4907]: E0127 18:06:45.747726 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.747909 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:06:45 crc kubenswrapper[4907]: E0127 18:06:45.748104 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.763365 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc 
kubenswrapper[4907]: I0127 18:06:45.773655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773783 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.788247 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.805410 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.820632 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.835728 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.862616 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876085 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876153 4907 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.886158 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.904345 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.918601 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.937122 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.950526 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.964850 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979687 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979667 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979764 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979959 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.995231 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.010837 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.022995 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.038608 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.063258 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081572 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081609 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081637 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081648 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184592 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184613 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184659 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.287174 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.288185 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.288393 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.288765 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.289544 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393667 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393781 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496703 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496756 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599207 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599263 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599302 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599319 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702247 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702410 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.740977 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:31:58.024132806 +0000 UTC Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.747338 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:46 crc kubenswrapper[4907]: E0127 18:06:46.747522 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805354 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805455 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908212 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908230 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011671 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011695 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011714 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.114655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.114957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.115057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.115149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.115232 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225155 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225252 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.328656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329241 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329531 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.432882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433534 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.538222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.538706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.538876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.539115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.539265 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642279 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.742652 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:41:01.705211671 +0000 UTC Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745190 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.747684 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.747771 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.747835 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:47 crc kubenswrapper[4907]: E0127 18:06:47.747890 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:47 crc kubenswrapper[4907]: E0127 18:06:47.747969 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:47 crc kubenswrapper[4907]: E0127 18:06:47.748066 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.848600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.848958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.849047 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.849164 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.849257 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952375 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952430 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055330 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.156945 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157675 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260473 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260523 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260544 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.363157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.363624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.363882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.364332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.364511 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468727 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468997 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.571944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572982 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.675395 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.675785 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.675928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.676062 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.676202 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.744253 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:53:11.29771291 +0000 UTC Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.747328 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:48 crc kubenswrapper[4907]: E0127 18:06:48.747605 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.778815 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779612 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883830 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883945 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.985906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986514 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089266 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089993 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193816 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.194005 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.296996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297569 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297653 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400702 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400724 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503296 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503309 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503326 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503341 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606729 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606772 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709994 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.746059 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:30:41.79856 +0000 UTC Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.747686 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.747730 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.747788 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:49 crc kubenswrapper[4907]: E0127 18:06:49.748069 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:49 crc kubenswrapper[4907]: E0127 18:06:49.748128 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:49 crc kubenswrapper[4907]: E0127 18:06:49.748196 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813404 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813503 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813581 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918156 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918206 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918234 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021820 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021832 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021859 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124856 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124947 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.125190 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228747 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228800 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331565 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331610 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331638 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331650 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434123 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434220 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537405 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537468 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.640959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641082 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641143 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745692 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745741 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.746890 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:51:51.907062364 +0000 UTC Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.747116 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:50 crc kubenswrapper[4907]: E0127 18:06:50.747396 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.848996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849132 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952149 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.054927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.054976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.054987 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.055003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.055013 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157421 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157507 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260825 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364428 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364457 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467166 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467236 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570201 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570240 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673242 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673316 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673376 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747435 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:58:17.742717141 +0000 UTC Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747474 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747508 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747501 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:51 crc kubenswrapper[4907]: E0127 18:06:51.747774 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:51 crc kubenswrapper[4907]: E0127 18:06:51.747959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:51 crc kubenswrapper[4907]: E0127 18:06:51.748175 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776044 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776120 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776171 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776190 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879394 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879434 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982660 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086608 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189697 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293128 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293144 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293156 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395257 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395345 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.497927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.497994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.498010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.498028 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.498047 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600260 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600309 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600337 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703742 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.747277 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:52 crc kubenswrapper[4907]: E0127 18:06:52.747515 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.747621 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:50:41.415202085 +0000 UTC Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806737 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806763 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908860 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908903 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908921 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001017 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001125 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001137 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.013146 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017239 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017329 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.030508 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033473 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033580 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033595 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.045228 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050736 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.062776 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066861 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066975 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.080269 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.080388 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.081914 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.081983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.081994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.082008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.082018 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184176 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184187 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184203 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184213 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287123 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389253 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389263 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389279 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389288 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491745 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491843 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491859 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594492 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594535 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697328 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697394 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747684 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747755 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:21:26.726815581 +0000 UTC Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747713 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747713 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.747887 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.747986 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.748046 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.799941 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.799981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.799990 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.800005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.800014 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903702 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903815 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903874 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006455 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006520 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109833 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212380 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212453 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212464 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314926 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314939 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314964 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417158 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520341 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520401 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520414 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520448 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623092 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623137 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725050 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725135 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.747674 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:54 crc kubenswrapper[4907]: E0127 18:06:54.747813 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.747859 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:45:45.398796095 +0000 UTC Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.826928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.826981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.826993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.827010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.827020 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929276 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929387 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031914 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031945 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031955 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134719 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134807 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134822 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236782 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236847 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236880 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339803 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339868 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339886 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.441963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442016 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442030 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442040 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544156 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544166 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647082 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647125 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647145 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.747857 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:55 crc kubenswrapper[4907]: E0127 18:06:55.748034 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.748088 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:07:27.679249826 +0000 UTC Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.748233 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:55 crc kubenswrapper[4907]: E0127 18:06:55.748387 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.747800 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:55 crc kubenswrapper[4907]: E0127 18:06:55.748936 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750097 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750209 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750228 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.771397 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.793637 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.807319 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.820047 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.832203 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.842340 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.853985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854056 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854091 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854404 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe6
27a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.866108 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.878832 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.890974 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.905678 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.925894 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.950575 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956255 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956280 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.966949 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.985315 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.997012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.013415 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:56Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.027196 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:56Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:56 crc 
kubenswrapper[4907]: I0127 18:06:56.059957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060040 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060064 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060080 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162385 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162396 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162430 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265712 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265759 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265798 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.369952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370438 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370482 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472431 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472477 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472497 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575586 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575688 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678489 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678529 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.747645 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:06:56 crc kubenswrapper[4907]: E0127 18:06:56.747934 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.748160 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:56 crc kubenswrapper[4907]: E0127 18:06:56.748241 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.749107 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:54:14.379692558 +0000 UTC Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781418 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884850 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884878 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987298 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987369 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987383 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090581 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090592 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090636 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193301 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193380 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193416 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193430 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.247106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.247300 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.247429 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:29.247400112 +0000 UTC m=+104.376682734 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295485 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295590 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397923 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397932 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513522 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513568 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616364 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719466 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719484 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.747048 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.747118 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.747195 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.747218 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.747305 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.747464 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.750042 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:04:28.065301516 +0000 UTC Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821490 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821512 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821521 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923941 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923971 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.025949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.025997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.026006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.026022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.026069 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128455 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128463 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230662 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230672 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230687 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230698 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332863 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332905 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435674 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435702 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537978 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.538003 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640492 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640545 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640573 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640583 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743620 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743707 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743734 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.746979 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:58 crc kubenswrapper[4907]: E0127 18:06:58.747099 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.751197 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:39:59.381038056 +0000 UTC Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846353 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948665 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948746 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948778 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051749 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051806 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154710 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.193621 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/0.log" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.193672 4907 generic.go:334] "Generic (PLEG): container finished" podID="985b7738-a27c-4276-8160-c2baa64ab7f6" containerID="3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d" exitCode=1 Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.193701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerDied","Data":"3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.194044 4907 scope.go:117] "RemoveContainer" containerID="3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.207539 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.218598 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.236483 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.252443 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.256836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.256905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.256922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.257002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.257048 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.272634 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.287409 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.298742 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.308523 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.321265 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.331228 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc 
kubenswrapper[4907]: I0127 18:06:59.343386 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699ee
e8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.356967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359260 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359283 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359292 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.367740 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.382524 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.393045 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.408380 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.421639 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.432435 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470824 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc 
kubenswrapper[4907]: I0127 18:06:59.470854 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470865 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572959 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675352 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675482 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.747616 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.747678 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.747737 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:59 crc kubenswrapper[4907]: E0127 18:06:59.747825 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:59 crc kubenswrapper[4907]: E0127 18:06:59.747959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:59 crc kubenswrapper[4907]: E0127 18:06:59.748044 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.751420 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:09:36.249288924 +0000 UTC Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778248 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778295 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778314 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881151 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881168 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881180 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983811 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086865 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086903 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086951 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189851 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.198017 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/0.log" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.198067 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.211149 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.221398 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.233071 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.245649 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.258113 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.269264 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.280475 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.291108 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.292986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293035 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293074 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293091 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.304481 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.322384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.334728 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.349078 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.363191 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T1
8:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.376791 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395543 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.407735 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc 
kubenswrapper[4907]: I0127 18:07:00.426987 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.442699 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498260 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498275 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498286 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.600928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.600988 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.601007 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.601034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.601047 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704315 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704384 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704433 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.747958 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:00 crc kubenswrapper[4907]: E0127 18:07:00.748189 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.752144 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:30:52.299722405 +0000 UTC Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807795 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910806 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910820 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.012984 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013120 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116178 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116198 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116221 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116236 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218427 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218501 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218515 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321052 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321125 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321164 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424661 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424729 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528344 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631242 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631345 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631375 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734785 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734813 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.747515 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.747608 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:01 crc kubenswrapper[4907]: E0127 18:07:01.747706 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.747757 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:01 crc kubenswrapper[4907]: E0127 18:07:01.747926 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:01 crc kubenswrapper[4907]: E0127 18:07:01.748002 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.752457 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:49:41.346015523 +0000 UTC Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837233 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837264 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939883 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939895 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939924 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042526 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042628 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042705 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145347 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145403 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145411 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248150 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248170 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248224 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351536 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351643 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455419 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558713 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662513 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662634 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662649 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.747616 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:02 crc kubenswrapper[4907]: E0127 18:07:02.747870 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.753639 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:26:02.882215952 +0000 UTC Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765574 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869076 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869137 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869155 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869198 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972636 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.075929 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.075994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.076017 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.076046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.076071 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179081 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179101 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179114 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197366 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197414 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.213717 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218829 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.241002 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277416 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277605 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277623 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.294389 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298294 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.314912 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319629 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319655 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.332256 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.332388 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334534 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334552 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334632 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334653 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437012 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437102 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540076 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540134 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540170 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540188 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643210 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643266 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643302 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746121 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746170 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746218 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.747708 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.747841 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.748031 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.748039 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.748136 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.748301 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.754282 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:40:21.121491901 +0000 UTC Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848857 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848878 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848981 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951351 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951361 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054401 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054432 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156615 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156645 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258588 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258601 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258627 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362820 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465829 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568517 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568662 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568700 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671754 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.747781 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:04 crc kubenswrapper[4907]: E0127 18:07:04.748235 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.754449 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:56:10.269990817 +0000 UTC Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.764084 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.773976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774028 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774084 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877666 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877681 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980384 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980393 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980411 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980420 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082697 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082711 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082750 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185899 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185985 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288797 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288838 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392341 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392386 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.494934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.494988 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.495004 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.495024 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.495039 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597716 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597729 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597749 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597762 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700828 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700840 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.748161 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.748382 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:05 crc kubenswrapper[4907]: E0127 18:07:05.748537 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:05 crc kubenswrapper[4907]: E0127 18:07:05.748381 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.748176 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:05 crc kubenswrapper[4907]: E0127 18:07:05.748754 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.755235 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:02:52.867434457 +0000 UTC Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.770345 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.788512 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805971 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.810358 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.837051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.850680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.884873 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.905549 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908649 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908741 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908752 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.920204 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.936479 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.960875 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.979627 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.001134 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011717 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011729 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.021111 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.036980 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.053954 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.073227 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.089395 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.110544 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115004 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115065 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115084 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115125 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.127480 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 
18:07:06.218106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218147 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321465 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424606 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424720 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424740 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528311 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.631979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632080 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632097 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734983 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.747241 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:06 crc kubenswrapper[4907]: E0127 18:07:06.747426 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.755634 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:36:02.668413296 +0000 UTC Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837875 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837907 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837922 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941405 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045225 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045302 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045397 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149120 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149200 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149245 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.252738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253150 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253180 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253198 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356513 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356541 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356589 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458941 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458984 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.459002 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562818 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.665939 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666071 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.747674 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.747934 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.747945 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:07 crc kubenswrapper[4907]: E0127 18:07:07.748097 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.748215 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:07:07 crc kubenswrapper[4907]: E0127 18:07:07.748348 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:07 crc kubenswrapper[4907]: E0127 18:07:07.748427 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.756423 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:56:20.049021493 +0000 UTC Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769151 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871750 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871807 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975285 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975301 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975312 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079095 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079170 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182549 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182703 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182719 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.228585 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.232354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.233651 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.255464 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5
df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.273241 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285576 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285585 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285607 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.307442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.322299 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.337384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.350227 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.360517 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.376821 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388402 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388414 4907 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.393657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc 
kubenswrapper[4907]: I0127 18:07:08.412198 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.426977 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.441657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.464225 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.474801 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.489442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490791 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490803 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.499676 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.512354 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.522780 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.535735 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592726 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695201 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695420 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.747981 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:08 crc kubenswrapper[4907]: E0127 18:07:08.748152 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.757055 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:33:52.432561307 +0000 UTC Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798177 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798199 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798210 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901755 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901799 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901850 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005576 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005598 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005610 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108148 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.211870 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.211952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.211971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.212001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.212019 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.239767 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.240826 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.246341 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" exitCode=1 Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.246408 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.246488 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.247524 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.247796 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.284125 4907 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb
55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.306074 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315910 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 
18:07:09.315973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315996 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.321993 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.346422 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.365664 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1
da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.388909 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.406054 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc 
kubenswrapper[4907]: I0127 18:07:09.419532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419617 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419636 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419746 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.431624 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.447791 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.464498 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.480162 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.483491 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.483695 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.48365456 +0000 UTC m=+148.612937222 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.494400 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42
ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.507309 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.519342 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521854 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521887 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521912 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.534777 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.547039 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.556302 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.564728 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.575306 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.585545 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.587719 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.587760 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.587802 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.586552 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588022 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588049 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588052 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588063 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588071 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588123 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588102631 +0000 UTC m=+148.717385243 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.587952 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588301 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588270256 +0000 UTC m=+148.717552918 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588014 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588357 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588348898 +0000 UTC m=+148.717631590 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588476 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588462611 +0000 UTC m=+148.717745243 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624508 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624609 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727600 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.747946 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.747976 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.747949 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.748099 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.748256 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.748333 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.757376 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:13:27.464826909 +0000 UTC
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830180 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830230 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830246 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933243 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140611 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140685 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244268 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244342 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.252512 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.258090 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:10 crc kubenswrapper[4907]: E0127 18:07:10.258433 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.274318 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.291426 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.310598 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.326414 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.343530 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348514 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348602 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.359795 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.388008 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.403117 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.418398 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.449894 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451929 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.452013 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.476583 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.492654 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.513776 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.527197 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.543918 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa
41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554460 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554493 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554525 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.557489 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc 
kubenswrapper[4907]: I0127 18:07:10.578074 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.591361 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.602854 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656851 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656890 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.747515 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:10 crc kubenswrapper[4907]: E0127 18:07:10.747955 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.757853 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:26:02.814091342 +0000 UTC Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759490 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759525 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862036 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862144 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862164 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862175 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964365 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964383 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067594 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067687 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170863 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170881 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170909 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170927 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274268 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274455 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378024 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378187 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481859 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481938 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481964 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.482020 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.584994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585073 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585134 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585158 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688461 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688478 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.748057 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.748095 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.748195 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:11 crc kubenswrapper[4907]: E0127 18:07:11.748877 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:11 crc kubenswrapper[4907]: E0127 18:07:11.749070 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:11 crc kubenswrapper[4907]: E0127 18:07:11.749264 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.758889 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:53:13.634605027 +0000 UTC Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791489 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791541 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.894399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895236 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895739 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.998613 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999309 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102534 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102549 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205721 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205790 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.326528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.326981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.327051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.327121 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.327188 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.429965 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430080 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532468 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532544 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635117 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738707 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738747 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738763 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.747263 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:12 crc kubenswrapper[4907]: E0127 18:07:12.747404 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.759832 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:45:34.207482495 +0000 UTC Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842315 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842356 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.945721 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.946767 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.946908 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.947051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.947195 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050465 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154093 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154121 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.263319 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.263811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.263946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.264074 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.264614 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.367954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.367998 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.368038 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.368057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.368071 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471302 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.472021 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526466 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526526 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526550 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.550346 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556461 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556473 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.571498 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577539 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577634 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.598778 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604881 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604936 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.626704 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.631871 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.631955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.631971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.632015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.632030 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.649631 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.649868 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.651915 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.651971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.651983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.652002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.652015 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.747422 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.747516 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.747662 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.747581 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.748241 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.748294 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.759970 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:38:43.86760472 +0000 UTC Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760478 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862910 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862945 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966253 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966279 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966297 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070004 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070055 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070068 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070077 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174035 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174551 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174610 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174667 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278504 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.380991 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381071 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381097 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381129 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381152 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.483904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484036 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484092 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586090 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586153 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689084 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689178 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689208 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689231 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.747436 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:14 crc kubenswrapper[4907]: E0127 18:07:14.747641 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.761094 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:36:27.74474011 +0000 UTC Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792101 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792140 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894382 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894405 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894414 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997030 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997112 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997152 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100213 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100366 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100382 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100392 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203206 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203309 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203363 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306938 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409859 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409908 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516388 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620241 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723266 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723349 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723401 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.747948 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.747995 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.748022 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:15 crc kubenswrapper[4907]: E0127 18:07:15.748155 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:15 crc kubenswrapper[4907]: E0127 18:07:15.748277 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:15 crc kubenswrapper[4907]: E0127 18:07:15.748396 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.761933 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:25:50.366935437 +0000 UTC Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.765465 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc 
kubenswrapper[4907]: I0127 18:07:15.798062 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.820540 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826191 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826228 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.842100 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.860518 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.886217 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.919055 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.929959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930050 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.941530 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T
18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.955064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.969839 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.982012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.993834 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.006328 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.022395 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.034005 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035047 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc 
kubenswrapper[4907]: I0127 18:07:16.035775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035795 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.049709 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.063051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.076795 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.094231 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139127 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241807 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241824 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241837 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344899 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344961 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448547 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448656 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550834 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550856 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550900 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654404 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654451 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.747840 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:16 crc kubenswrapper[4907]: E0127 18:07:16.748144 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757727 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757799 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757838 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.763110 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:17:28.554729222 +0000 UTC Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862244 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965077 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965186 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068508 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068529 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171510 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171540 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171610 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171640 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274628 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377330 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377406 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377455 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480974 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480991 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584674 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584699 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688833 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.747722 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.747779 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:17 crc kubenswrapper[4907]: E0127 18:07:17.747897 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.748024 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:17 crc kubenswrapper[4907]: E0127 18:07:17.748180 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:17 crc kubenswrapper[4907]: E0127 18:07:17.748470 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.771904 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:40:52.17760234 +0000 UTC Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791703 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791758 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897746 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897858 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897922 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.000897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.000955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.000981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.001010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.001028 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104061 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104076 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104087 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206495 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206552 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309872 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309910 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412918 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412981 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516257 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516363 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516380 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619451 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723254 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723327 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723352 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723403 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.747886 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:18 crc kubenswrapper[4907]: E0127 18:07:18.748083 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.772943 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:06:48.718792183 +0000 UTC Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826844 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826969 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826989 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.930961 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931043 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931141 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034483 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034617 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138493 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138513 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138525 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241412 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241422 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241439 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241450 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.344955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345011 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345027 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345043 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447573 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447629 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447677 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551244 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551389 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654514 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.747547 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:19 crc kubenswrapper[4907]: E0127 18:07:19.747805 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.747872 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.748046 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:19 crc kubenswrapper[4907]: E0127 18:07:19.748091 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:19 crc kubenswrapper[4907]: E0127 18:07:19.748185 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756632 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756681 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756720 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756736 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.774144 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:58:11.837025043 +0000 UTC Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860351 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860410 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860441 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963185 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963213 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963233 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963244 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065216 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065257 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065295 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168319 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168345 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168411 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271444 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271470 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271487 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374871 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374896 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478203 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478254 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478272 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581706 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.684962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685064 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685082 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.747868 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:20 crc kubenswrapper[4907]: E0127 18:07:20.748091 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.775372 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:24:39.133503254 +0000 UTC Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787902 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787921 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787953 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787977 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890458 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890529 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993732 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993856 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098666 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201764 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201816 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.303958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304045 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406762 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406850 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406893 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509707 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509752 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613609 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716423 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716466 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716485 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.747379 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.747420 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.747529 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:21 crc kubenswrapper[4907]: E0127 18:07:21.747773 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:21 crc kubenswrapper[4907]: E0127 18:07:21.748173 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:21 crc kubenswrapper[4907]: E0127 18:07:21.748442 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.775498 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:30:21.957651324 +0000 UTC Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.818925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.818973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.818986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.819005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.819018 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922274 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922361 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922471 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025794 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129360 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129412 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232797 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232822 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232840 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336207 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336356 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542672 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542747 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646836 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.748011 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:22 crc kubenswrapper[4907]: E0127 18:07:22.748214 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.749386 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:22 crc kubenswrapper[4907]: E0127 18:07:22.749762 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750393 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.776191 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:04:57.643658389 +0000 UTC Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854295 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854319 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957469 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957620 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060503 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060603 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060618 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164073 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164092 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164139 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266826 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369248 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369356 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475909 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475939 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475962 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579450 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579468 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579513 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682965 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682987 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684725 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.704485 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709654 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709707 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.725123 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730489 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730517 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.748094 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.748128 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.748217 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.748316 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.748450 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.748691 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.753001 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757894 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757932 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757961 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757973 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.773120 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.776505 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:54:17.272510077 +0000 UTC Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778192 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.799165 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.799401 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801326 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905165 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905197 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008410 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008485 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008609 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.110922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.110979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.110996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.111018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.111034 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213570 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213728 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316216 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316276 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419686 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419750 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419780 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522687 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522783 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522836 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626045 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626054 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626078 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729497 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729606 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729642 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729701 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.748494 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:24 crc kubenswrapper[4907]: E0127 18:07:24.748758 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.777674 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:31:36.481462642 +0000 UTC Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832877 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832899 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939488 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939506 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939529 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939586 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042453 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042576 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042619 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.145986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146058 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146149 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248506 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.351823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.351895 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.352006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.352046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.352075 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455853 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455893 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559414 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559439 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662628 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662708 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662789 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.747590 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.747787 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.747859 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:25 crc kubenswrapper[4907]: E0127 18:07:25.748101 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:25 crc kubenswrapper[4907]: E0127 18:07:25.748216 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:25 crc kubenswrapper[4907]: E0127 18:07:25.748352 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767965 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767990 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.768056 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.768442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.779424 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:22:54.561760609 +0000 UTC Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.785154 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.808593 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.839673 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.864352 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877516 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.886689 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.907779 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.923782 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.947538 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.960835 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc 
kubenswrapper[4907]: I0127 18:07:25.981879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981970 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981985 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981299 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.000395 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.018222 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.032859 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.046690 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.066098 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084446 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084771 4907 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.105326 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.121905 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.187938 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.187996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.188013 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc 
kubenswrapper[4907]: I0127 18:07:26.188046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.188065 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290662 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290792 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290817 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.393906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.393983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.394001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.394026 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.394044 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497654 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497734 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.601895 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.601968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.601985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.602015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.602033 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705799 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.747443 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:26 crc kubenswrapper[4907]: E0127 18:07:26.747719 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.780284 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:27:33.654314233 +0000 UTC Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808352 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808427 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808495 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911719 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911785 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015192 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015218 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015237 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117369 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117392 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220289 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220305 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220333 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322282 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322419 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425207 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425375 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425395 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528517 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528540 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528587 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632123 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632174 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632196 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735523 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735611 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735634 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.747948 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.747966 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:27 crc kubenswrapper[4907]: E0127 18:07:27.748254 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.748307 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:27 crc kubenswrapper[4907]: E0127 18:07:27.748409 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:27 crc kubenswrapper[4907]: E0127 18:07:27.748957 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.780916 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:27:16.21356946 +0000 UTC Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838691 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941512 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941791 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045605 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045625 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148862 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251854 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251888 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.354917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.354983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.355003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.355029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.355048 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457961 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457974 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561312 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561362 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561396 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561408 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664225 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664253 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664271 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.747405 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:28 crc kubenswrapper[4907]: E0127 18:07:28.747620 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766607 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.781678 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:44:29.572101043 +0000 UTC Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870153 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973829 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973849 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973873 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973891 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077405 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077424 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077449 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077471 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181037 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181093 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181116 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181166 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284583 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284712 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.324389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.324792 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.324958 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:33.324901136 +0000 UTC m=+168.454183788 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388348 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388710 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491708 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491884 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.595949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596068 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596127 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699161 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699221 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699239 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699262 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699282 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.747655 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.747820 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.747933 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.748005 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.748231 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.748433 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.781897 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:21:08.256907031 +0000 UTC Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803252 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803277 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803313 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803335 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906176 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906210 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.024980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025122 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025151 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025188 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025227 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129542 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232799 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232826 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232838 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334706 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437607 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437661 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437688 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541097 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541171 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541202 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644490 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.747080 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:30 crc kubenswrapper[4907]: E0127 18:07:30.747296 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748607 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.782884 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:28:42.231426672 +0000 UTC Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.851917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852080 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852120 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.955223 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.955738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.955978 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.956113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.956231 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.060855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.061430 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.061704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.061885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.062031 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165386 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165409 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269275 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269398 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371735 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.473946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.473992 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.474002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.474022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.474033 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577290 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577332 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680881 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680940 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680999 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.747828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.748040 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:31 crc kubenswrapper[4907]: E0127 18:07:31.748283 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.748349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:31 crc kubenswrapper[4907]: E0127 18:07:31.748712 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:31 crc kubenswrapper[4907]: E0127 18:07:31.748813 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.783090 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:40:56.331059721 +0000 UTC Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784817 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784842 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784863 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888464 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888534 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.992813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.993318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.993548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.993839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.994015 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097081 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097167 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097180 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.199793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200354 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303351 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303382 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303395 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406873 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406953 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406967 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510160 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613214 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716743 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716834 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716891 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.747480 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:32 crc kubenswrapper[4907]: E0127 18:07:32.747777 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.784373 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:21:24.321946899 +0000 UTC Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820402 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.923688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924962 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028162 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028180 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131017 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131182 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234473 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234598 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234654 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336837 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439523 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439663 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543501 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543530 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543595 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647150 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647212 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647319 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.747052 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.747231 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.747304 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:33 crc kubenswrapper[4907]: E0127 18:07:33.747238 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:33 crc kubenswrapper[4907]: E0127 18:07:33.747433 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:33 crc kubenswrapper[4907]: E0127 18:07:33.747609 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749444 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749547 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749576 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.784686 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:12:16.268155149 +0000 UTC
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852494 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947301 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947328 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.001104 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"]
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.001522 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.003607 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.004992 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.005105 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.005128 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.039602 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-n4rxh" podStartSLOduration=84.039545505 podStartE2EDuration="1m24.039545505s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.039321979 +0000 UTC m=+109.168604591" watchObservedRunningTime="2026-01-27 18:07:34.039545505 +0000 UTC m=+109.168828137"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.075246 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.075610 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a08cad-2677-40b2-95d1-727093d151cc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.075825 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a08cad-2677-40b2-95d1-727093d151cc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.076037 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.076163 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a08cad-2677-40b2-95d1-727093d151cc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.076589 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" podStartSLOduration=84.076575616 podStartE2EDuration="1m24.076575616s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.059206204 +0000 UTC m=+109.188488886" watchObservedRunningTime="2026-01-27 18:07:34.076575616 +0000 UTC m=+109.205858228"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.110547 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.110517448 podStartE2EDuration="1m27.110517448s" podCreationTimestamp="2026-01-27 18:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.109361485 +0000 UTC m=+109.238644117" watchObservedRunningTime="2026-01-27 18:07:34.110517448 +0000 UTC m=+109.239800070"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177736 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a08cad-2677-40b2-95d1-727093d151cc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177795 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177838 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a08cad-2677-40b2-95d1-727093d151cc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177859 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a08cad-2677-40b2-95d1-727093d151cc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.178850 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a08cad-2677-40b2-95d1-727093d151cc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.178911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.179158 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.181511 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" podStartSLOduration=83.181486321 podStartE2EDuration="1m23.181486321s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.180393539 +0000 UTC m=+109.309676151" watchObservedRunningTime="2026-01-27 18:07:34.181486321 +0000 UTC m=+109.310768933"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.194659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a08cad-2677-40b2-95d1-727093d151cc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.200086 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a08cad-2677-40b2-95d1-727093d151cc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.238552 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.238532631 podStartE2EDuration="1m28.238532631s" podCreationTimestamp="2026-01-27 18:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.22432741 +0000 UTC m=+109.353610072" watchObservedRunningTime="2026-01-27 18:07:34.238532631 +0000 UTC m=+109.367815253"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.267315 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.267292293 podStartE2EDuration="1m26.267292293s" podCreationTimestamp="2026-01-27 18:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.253503974 +0000 UTC m=+109.382786606" watchObservedRunningTime="2026-01-27 18:07:34.267292293 +0000 UTC m=+109.396574915"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.267921 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podStartSLOduration=84.267915931 podStartE2EDuration="1m24.267915931s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.266693165 +0000 UTC m=+109.395975797" watchObservedRunningTime="2026-01-27 18:07:34.267915931 +0000 UTC m=+109.397198553"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.282731 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.282704648 podStartE2EDuration="54.282704648s" podCreationTimestamp="2026-01-27 18:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.281817213 +0000 UTC m=+109.411099835" watchObservedRunningTime="2026-01-27 18:07:34.282704648 +0000 UTC m=+109.411987260"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.309818 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fgtpz" podStartSLOduration=84.309793252 podStartE2EDuration="1m24.309793252s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.309472203 +0000 UTC m=+109.438754825" watchObservedRunningTime="2026-01-27 18:07:34.309793252 +0000 UTC m=+109.439075864"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.309982 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.309976197 podStartE2EDuration="30.309976197s" podCreationTimestamp="2026-01-27 18:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.293623134 +0000 UTC m=+109.422905766" watchObservedRunningTime="2026-01-27 18:07:34.309976197 +0000 UTC m=+109.439258809"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.319841 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.371518 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9plnb" podStartSLOduration=84.371484366 podStartE2EDuration="1m24.371484366s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.370215929 +0000 UTC m=+109.499498541" watchObservedRunningTime="2026-01-27 18:07:34.371484366 +0000 UTC m=+109.500767018"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.747733 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:07:34 crc kubenswrapper[4907]: E0127 18:07:34.748710 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.784886 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:52:44.005794636 +0000 UTC
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.784951 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.795056 4907 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.347388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" event={"ID":"25a08cad-2677-40b2-95d1-727093d151cc","Type":"ContainerStarted","Data":"d636975a3e7b7523a8a6bd3240cca4a1307763cb7e68116c4578ffc2d3a28180"}
Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.347483 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" event={"ID":"25a08cad-2677-40b2-95d1-727093d151cc","Type":"ContainerStarted","Data":"ef2aa78c4235d9dc2e37fafb5f064e6b970349ac342f9df1ccef8615d05840be"}
Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.370516 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" podStartSLOduration=85.370496641 podStartE2EDuration="1m25.370496641s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:35.369897094 +0000 UTC m=+110.499179736" watchObservedRunningTime="2026-01-27 18:07:35.370496641 +0000 UTC m=+110.499779253"
Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.748053 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.748198 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.748198 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.749373 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.749654 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.751373 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7"
Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.751672 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde"
Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.750153 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:36 crc kubenswrapper[4907]: I0127 18:07:36.747803 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:07:36 crc kubenswrapper[4907]: E0127 18:07:36.748299 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:07:37 crc kubenswrapper[4907]: I0127 18:07:37.747467 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:37 crc kubenswrapper[4907]: I0127 18:07:37.747596 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:37 crc kubenswrapper[4907]: I0127 18:07:37.747604 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:37 crc kubenswrapper[4907]: E0127 18:07:37.747733 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:37 crc kubenswrapper[4907]: E0127 18:07:37.747878 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:37 crc kubenswrapper[4907]: E0127 18:07:37.747970 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:38 crc kubenswrapper[4907]: I0127 18:07:38.747602 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:07:38 crc kubenswrapper[4907]: E0127 18:07:38.747747 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:07:39 crc kubenswrapper[4907]: I0127 18:07:39.747412 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:39 crc kubenswrapper[4907]: I0127 18:07:39.747453 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:39 crc kubenswrapper[4907]: E0127 18:07:39.747592 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:39 crc kubenswrapper[4907]: I0127 18:07:39.747618 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:39 crc kubenswrapper[4907]: E0127 18:07:39.747779 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:39 crc kubenswrapper[4907]: E0127 18:07:39.747879 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:40 crc kubenswrapper[4907]: I0127 18:07:40.747937 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:07:40 crc kubenswrapper[4907]: E0127 18:07:40.748508 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:07:41 crc kubenswrapper[4907]: I0127 18:07:41.748116 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:41 crc kubenswrapper[4907]: I0127 18:07:41.748137 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:41 crc kubenswrapper[4907]: I0127 18:07:41.748249 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:41 crc kubenswrapper[4907]: E0127 18:07:41.748472 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:41 crc kubenswrapper[4907]: E0127 18:07:41.748679 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:41 crc kubenswrapper[4907]: E0127 18:07:41.749000 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:42 crc kubenswrapper[4907]: I0127 18:07:42.747801 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:42 crc kubenswrapper[4907]: E0127 18:07:42.747938 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:43 crc kubenswrapper[4907]: I0127 18:07:43.747528 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:43 crc kubenswrapper[4907]: I0127 18:07:43.747575 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:43 crc kubenswrapper[4907]: E0127 18:07:43.747748 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:43 crc kubenswrapper[4907]: I0127 18:07:43.747761 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:43 crc kubenswrapper[4907]: E0127 18:07:43.747854 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:43 crc kubenswrapper[4907]: E0127 18:07:43.747959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:44 crc kubenswrapper[4907]: I0127 18:07:44.747717 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:44 crc kubenswrapper[4907]: E0127 18:07:44.747960 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.382440 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383289 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/0.log" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383376 4907 generic.go:334] "Generic (PLEG): container finished" podID="985b7738-a27c-4276-8160-c2baa64ab7f6" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" exitCode=1 Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383422 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerDied","Data":"dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505"} Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383479 4907 scope.go:117] "RemoveContainer" containerID="3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.384097 4907 scope.go:117] "RemoveContainer" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.384382 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fgtpz_openshift-multus(985b7738-a27c-4276-8160-c2baa64ab7f6)\"" pod="openshift-multus/multus-fgtpz" podUID="985b7738-a27c-4276-8160-c2baa64ab7f6" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.737315 4907 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 
18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.747728 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.747764 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.749658 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.750038 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.749800 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.750309 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.882259 4907 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:07:46 crc kubenswrapper[4907]: I0127 18:07:46.390218 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:07:46 crc kubenswrapper[4907]: I0127 18:07:46.747294 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:46 crc kubenswrapper[4907]: E0127 18:07:46.747445 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:47 crc kubenswrapper[4907]: I0127 18:07:47.747977 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:47 crc kubenswrapper[4907]: E0127 18:07:47.748167 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:47 crc kubenswrapper[4907]: I0127 18:07:47.748438 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:47 crc kubenswrapper[4907]: E0127 18:07:47.748523 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:47 crc kubenswrapper[4907]: I0127 18:07:47.748831 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:47 crc kubenswrapper[4907]: E0127 18:07:47.748980 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:48 crc kubenswrapper[4907]: I0127 18:07:48.748049 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:48 crc kubenswrapper[4907]: E0127 18:07:48.748276 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:48 crc kubenswrapper[4907]: I0127 18:07:48.749754 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.401443 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.404662 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3"} Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.405151 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.432023 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podStartSLOduration=99.431989622 podStartE2EDuration="1m39.431989622s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:49.430967242 +0000 UTC m=+124.560249874" watchObservedRunningTime="2026-01-27 
18:07:49.431989622 +0000 UTC m=+124.561272274" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.719374 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n2z5k"] Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.719525 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.719687 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.747315 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.747380 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.747380 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.747494 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.747657 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.747780 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:50 crc kubenswrapper[4907]: E0127 18:07:50.883477 4907 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747722 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747721 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747771 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747950 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.747954 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.748078 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.748256 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.748412 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.747911 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748059 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.748271 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748323 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.748423 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748462 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.748596 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748638 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.747988 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.748033 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.748116 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.748157 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.751674 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.751771 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.751863 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.752293 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.884823 4907 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:07:56 crc kubenswrapper[4907]: I0127 18:07:56.748693 4907 scope.go:117] "RemoveContainer" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.438431 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.438525 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4"} Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747605 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747698 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.747753 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747606 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.747901 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747957 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.748010 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.748058 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747210 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747260 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747312 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747211 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747409 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747621 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747642 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747702 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747258 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747315 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747470 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747849 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.750528 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.750708 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.751846 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.752158 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.752308 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.752587 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.445731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.496204 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ljpb"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.497881 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.503700 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.504071 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.505189 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.505828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.506018 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.506040 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.506782 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.507156 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.507228 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.507384 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.512868 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.513732 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.513942 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.514414 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.514590 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.515146 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.518444 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.519290 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-znwrp"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.519928 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.520104 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.521936 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.521962 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.544865 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.545022 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.545086 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.545480 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.552790 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.553132 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.553683 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.555997 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.556206 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.556637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.556739 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.557845 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558198 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558695 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558780 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558832 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558918 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558989 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559074 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559131 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559234 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559339 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559516 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559729 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.560017 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.560240 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561036 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561246 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561313 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561258 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561490 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561259 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561862 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561966 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562062 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562127 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562244 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562745 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562991 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-grwdr"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563214 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bjfcf"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563498 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563675 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563865 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.564103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.564314 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.565266 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.567662 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.568117 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.568952 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.569229 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.569902 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.570475 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.570894 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.571649 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.571880 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572011 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572170 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572222 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572356 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572601 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572951 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.573022 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.573958 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.576692 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.576906 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.578676 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.579919 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.580431 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-h79fx"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.580625 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.582095 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5z9d9"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.582412 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h72cm"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.601392 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.604347 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.604832 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.605053 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.605359 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.622095 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.627067 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.627654 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8qbc"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.627976 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628133 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628274 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628667 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628988 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-h79fx"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.629658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630071 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630314 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630437 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630732 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631117 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631295 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631601 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631910 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632164 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632391 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632617 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632768 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632999 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.633157 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.633291 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.633319 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.634709 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.635219 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.635482 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.635770 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.635899 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.638479 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.639620 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.640179 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ljpb"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.640274 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.640498 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.641093 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.642538 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.642929 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643260 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643336 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643588 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643583 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643734 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643748 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643878 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643986 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.644098 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.645912 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.645920 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.646236 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.646702 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.646846 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.648760 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.649263 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.649433 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.651664 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.652336 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.652539 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653909 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653945 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653963 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653982 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-encryption-config\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654000 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-encryption-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654015 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654030 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-machine-approver-tls\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654046 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654061 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654076 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654093 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-node-pullsecrets\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654123 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-image-import-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654157 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654172 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-client\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-stats-auth\") pod \"router-default-5444994796-h72cm\" (UID: 
\"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654204 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654224 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-dir\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654240 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-default-certificate\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654256 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654272 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/38363947-4768-44b8-b3fe-f7b5b482da55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654289 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654306 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-config\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654321 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654351 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a88f65-9871-4372-b728-ed61f22642e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654367 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-service-ca-bundle\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654382 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrpwn\" (UniqueName: \"kubernetes.io/projected/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-kube-api-access-jrpwn\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654397 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654412 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit-dir\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654452 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654468 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7vhr\" (UniqueName: \"kubernetes.io/projected/c2d359e7-9de4-4357-ae4c-8da07c1a880c-kube-api-access-k7vhr\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nqk\" (UniqueName: \"kubernetes.io/projected/db7629bc-e5a1-44e1-9af4-ecc83acfda75-kube-api-access-f7nqk\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" 
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654499 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-serving-cert\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654521 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6mm\" (UniqueName: \"kubernetes.io/projected/38363947-4768-44b8-b3fe-f7b5b482da55-kube-api-access-9m6mm\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654537 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654576 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654593 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22q8\" 
(UniqueName: \"kubernetes.io/projected/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-kube-api-access-f22q8\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654610 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654625 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b66d56fc-163d-469a-8a47-a3e1462b1af8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mbz\" (UniqueName: \"kubernetes.io/projected/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-kube-api-access-98mbz\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654656 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlqk\" (UniqueName: \"kubernetes.io/projected/1c678cbb-a03d-4ed8-85bd-befc2884454e-kube-api-access-8qlqk\") pod 
\"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654671 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654685 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-trusted-ca\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654700 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-metrics-tls\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-config\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-images\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwz8\" (UniqueName: \"kubernetes.io/projected/c8a31b60-14c7-4b73-a17f-60d101c0119b-kube-api-access-7gwz8\") pod \"downloads-7954f5f757-h79fx\" (UID: \"c8a31b60-14c7-4b73-a17f-60d101c0119b\") " pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654774 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654788 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc 
kubenswrapper[4907]: I0127 18:08:04.654818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a88f65-9871-4372-b728-ed61f22642e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654852 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654867 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654883 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdmhs\" (UniqueName: 
\"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654896 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654911 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654926 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-auth-proxy-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654940 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-policies\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.654955 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/9f254819-bf2c-4c38-881f-8d12a0d56278-kube-api-access-dspns\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654973 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdvm\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-kube-api-access-2pdvm\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655021 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655035 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655053 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-metrics-certs\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-client\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655124 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-config\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655153 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-service-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655215 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655264 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655293 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwbk\" (UniqueName: \"kubernetes.io/projected/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-kube-api-access-5qwbk\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655320 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod 
\"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655339 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655357 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-serving-cert\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655377 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4wt\" (UniqueName: \"kubernetes.io/projected/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-kube-api-access-cm4wt\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c678cbb-a03d-4ed8-85bd-befc2884454e-serving-cert\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655440 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655474 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kw24\" (UniqueName: \"kubernetes.io/projected/28a88f65-9871-4372-b728-ed61f22642e4-kube-api-access-6kw24\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655516 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b66d56fc-163d-469a-8a47-a3e1462b1af8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655542 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38363947-4768-44b8-b3fe-f7b5b482da55-serving-cert\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655598 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655624 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d359e7-9de4-4357-ae4c-8da07c1a880c-serving-cert\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.660637 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.662320 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.663568 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.664133 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.664255 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.666771 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.667451 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.667718 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.668220 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.668519 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-flpjm"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.669114 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.670289 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.670717 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.671199 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.671508 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.672131 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.672470 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.673696 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.674063 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.674084 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.674685 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.677012 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.688238 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.688379 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.688836 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.691263 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.696986 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.697657 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.698733 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.700233 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.700441 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.702269 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.703248 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.705248 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.706902 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8qbc"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.708383 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-znwrp"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.709886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.711304 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4tcrf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.713871 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.713966 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.714981 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.721796 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.723985 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.725304 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bjfcf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.726482 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.736448 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.736797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.737787 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.738885 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.740193 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-5z9d9"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.740647 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.741428 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.742479 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.744108 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.746194 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h79fx"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.747910 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.749054 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.750222 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-flpjm"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.751454 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.752743 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.754107 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.755220 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756438 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7vhr\" (UniqueName: \"kubernetes.io/projected/c2d359e7-9de4-4357-ae4c-8da07c1a880c-kube-api-access-k7vhr\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7nqk\" (UniqueName: \"kubernetes.io/projected/db7629bc-e5a1-44e1-9af4-ecc83acfda75-kube-api-access-f7nqk\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756504 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756530 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: 
\"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756624 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-kube-api-access-f22q8\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756656 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-serving-cert\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756685 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6mm\" (UniqueName: \"kubernetes.io/projected/38363947-4768-44b8-b3fe-f7b5b482da55-kube-api-access-9m6mm\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756711 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756740 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/b66d56fc-163d-469a-8a47-a3e1462b1af8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98mbz\" (UniqueName: \"kubernetes.io/projected/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-kube-api-access-98mbz\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756789 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l59wn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qlqk\" (UniqueName: \"kubernetes.io/projected/1c678cbb-a03d-4ed8-85bd-befc2884454e-kube-api-access-8qlqk\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756851 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-trusted-ca\") pod 
\"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756873 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-metrics-tls\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756923 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-config\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756945 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-images\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwz8\" (UniqueName: 
\"kubernetes.io/projected/c8a31b60-14c7-4b73-a17f-60d101c0119b-kube-api-access-7gwz8\") pod \"downloads-7954f5f757-h79fx\" (UID: \"c8a31b60-14c7-4b73-a17f-60d101c0119b\") " pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756989 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757011 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757037 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a88f65-9871-4372-b728-ed61f22642e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757087 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757109 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757134 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757157 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757180 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.757204 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-auth-proxy-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757227 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-policies\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757250 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/9f254819-bf2c-4c38-881f-8d12a0d56278-kube-api-access-dspns\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757276 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdvm\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-kube-api-access-2pdvm\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757304 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ljpb\" (UID: 
\"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757328 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757351 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757375 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-metrics-certs\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757403 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757429 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-client\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757480 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-config\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757504 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-service-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757602 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwbk\" (UniqueName: \"kubernetes.io/projected/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-kube-api-access-5qwbk\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757687 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c678cbb-a03d-4ed8-85bd-befc2884454e-serving-cert\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757710 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-serving-cert\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757738 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4wt\" (UniqueName: \"kubernetes.io/projected/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-kube-api-access-cm4wt\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757756 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757765 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kw24\" (UniqueName: \"kubernetes.io/projected/28a88f65-9871-4372-b728-ed61f22642e4-kube-api-access-6kw24\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b66d56fc-163d-469a-8a47-a3e1462b1af8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: 
I0127 18:08:04.757852 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757874 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d359e7-9de4-4357-ae4c-8da07c1a880c-serving-cert\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757899 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38363947-4768-44b8-b3fe-f7b5b482da55-serving-cert\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757934 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757961 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-encryption-config\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757995 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-encryption-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758030 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758053 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 
18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758077 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-machine-approver-tls\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758103 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758125 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758148 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-node-pullsecrets\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758171 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-image-import-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: 
\"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758195 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758217 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758265 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-client\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-stats-auth\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758340 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758364 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758390 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-dir\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" 
(UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-default-certificate\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758436 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/38363947-4768-44b8-b3fe-f7b5b482da55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758461 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758486 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-config\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: 
I0127 18:08:04.758535 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a88f65-9871-4372-b728-ed61f22642e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-service-ca-bundle\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758605 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrpwn\" (UniqueName: \"kubernetes.io/projected/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-kube-api-access-jrpwn\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758631 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit-dir\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758717 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758744 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.759542 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.759715 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.759905 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a88f65-9871-4372-b728-ed61f22642e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.760013 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-images\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.760193 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-trusted-ca\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.761403 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.761720 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-node-pullsecrets\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.762458 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-image-import-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.762518 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-config\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.762518 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763069 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-policies\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763071 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/38363947-4768-44b8-b3fe-f7b5b482da55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763601 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764324 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-service-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764424 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764855 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764943 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765039 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765248 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") 
" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765607 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-serving-cert\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763361 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765948 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-auth-proxy-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766084 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766137 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-client\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766436 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-config\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766468 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.767171 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768107 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768146 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit-dir\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768237 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768245 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-dir\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.767193 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b66d56fc-163d-469a-8a47-a3e1462b1af8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768490 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-metrics-tls\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768686 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" 
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.769021 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-service-ca-bundle\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.769781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770148 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b66d56fc-163d-469a-8a47-a3e1462b1af8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770301 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-config\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770445 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770485 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770715 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-client\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770963 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.771630 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.771676 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.772032 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d359e7-9de4-4357-ae4c-8da07c1a880c-serving-cert\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.772980 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.773295 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.773734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.773827 4907 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qtfgw"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.774599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.775002 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l59wn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.776174 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4tcrf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.777322 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qtfgw"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.778583 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bvqd5"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.784385 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-encryption-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.784402 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.784757 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/28a88f65-9871-4372-b728-ed61f22642e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785222 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c678cbb-a03d-4ed8-85bd-befc2884454e-serving-cert\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785401 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785546 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38363947-4768-44b8-b3fe-f7b5b482da55-serving-cert\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785882 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.786251 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.786303 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-encryption-config\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.786611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-machine-approver-tls\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.786735 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-serving-cert\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.788150 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.788299 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.794002 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.800925 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.820051 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.841077 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.860689 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.881450 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.902267 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.907832 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-metrics-certs\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" 
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.923091 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.941388 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.962170 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.981285 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.020839 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.025693 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-default-certificate\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.040430 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.061776 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.072694 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-stats-auth\") pod \"router-default-5444994796-h72cm\" (UID: 
\"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.082323 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.101597 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.121478 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.141244 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.160957 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.182595 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.200842 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.220978 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.243881 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.261988 4907 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.283401 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.302390 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.322109 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.341680 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.360877 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.393884 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.402498 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.421965 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.441002 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.461421 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.492469 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.502340 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.521326 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.541215 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.561181 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.581693 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.600963 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.621190 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.641453 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.662126 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.679979 4907 request.go:700] Waited for 1.015244823s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.681923 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.702123 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.720708 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.741240 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.762239 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.780938 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.803250 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.821658 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.841634 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.861784 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.881066 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.901407 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.920228 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.941951 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.960956 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.982183 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.001456 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.021205 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.042155 4907 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.062478 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.082247 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.103084 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.122720 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.142149 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.162013 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.180983 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.202357 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.221054 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.241975 4907 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.261391 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.282207 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.301069 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.320995 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.389813 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7vhr\" (UniqueName: \"kubernetes.io/projected/c2d359e7-9de4-4357-ae4c-8da07c1a880c-kube-api-access-k7vhr\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.407538 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7nqk\" (UniqueName: \"kubernetes.io/projected/db7629bc-e5a1-44e1-9af4-ecc83acfda75-kube-api-access-f7nqk\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.424174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-kube-api-access-f22q8\") pod \"machine-approver-56656f9798-5d442\" (UID: 
\"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.439094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.441250 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.461894 4907 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.463819 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.480636 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.485069 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:06 crc kubenswrapper[4907]: W0127 18:08:06.500751 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb98d017_ae04_4e9d_9b9f_dde9530b7acf.slice/crio-79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8 WatchSource:0}: Error finding container 79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8: Status 404 returned error can't find the container with id 79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8 Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.518642 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwz8\" (UniqueName: \"kubernetes.io/projected/c8a31b60-14c7-4b73-a17f-60d101c0119b-kube-api-access-7gwz8\") pod \"downloads-7954f5f757-h79fx\" (UID: \"c8a31b60-14c7-4b73-a17f-60d101c0119b\") " pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.538678 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qlqk\" (UniqueName: \"kubernetes.io/projected/1c678cbb-a03d-4ed8-85bd-befc2884454e-kube-api-access-8qlqk\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.554875 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mbz\" (UniqueName: \"kubernetes.io/projected/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-kube-api-access-98mbz\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.585007 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.612153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/9f254819-bf2c-4c38-881f-8d12a0d56278-kube-api-access-dspns\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.617107 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6mm\" (UniqueName: \"kubernetes.io/projected/38363947-4768-44b8-b3fe-f7b5b482da55-kube-api-access-9m6mm\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.632445 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.635648 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdvm\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-kube-api-access-2pdvm\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.659345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.668366 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.669213 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.676591 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwbk\" (UniqueName: \"kubernetes.io/projected/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-kube-api-access-5qwbk\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.699051 4907 request.go:700] Waited for 1.930337871s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.699608 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.705301 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.713771 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.726442 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrpwn\" (UniqueName: \"kubernetes.io/projected/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-kube-api-access-jrpwn\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:06 crc kubenswrapper[4907]: W0127 18:08:06.740950 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2d359e7_9de4_4357_ae4c_8da07c1a880c.slice/crio-6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c WatchSource:0}: Error finding container 6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c: Status 404 returned error can't find the container with id 6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.741861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kw24\" (UniqueName: \"kubernetes.io/projected/28a88f65-9871-4372-b728-ed61f22642e4-kube-api-access-6kw24\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.746735 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.757774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.774799 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.784005 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.784232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4wt\" (UniqueName: \"kubernetes.io/projected/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-kube-api-access-cm4wt\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.798437 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.800827 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.807834 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.815097 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.820869 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.826835 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.831014 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.841964 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.848939 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.851709 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.852707 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h79fx"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.862237 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.867803 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.880942 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.903183 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.919600 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.977192 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989597 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d480c-01ea-4ec4-b238-16e70bb9caff-serving-cert\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-bound-sa-token\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989694 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6ce158a5-7aba-4844-97ef-733b55d1694e-metrics-tls\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989718 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d480c-01ea-4ec4-b238-16e70bb9caff-config\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-service-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989790 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-config\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989863 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-srv-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989917 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178c40d2-9468-43b5-b33b-f95b60268091-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989954 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/decaba3c-d32c-4a1d-b413-52c195883560-signing-key\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990017 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-config\") pod 
\"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-config\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990084 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-serving-cert\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990130 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990168 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 
18:08:06.990187 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqkf\" (UniqueName: \"kubernetes.io/projected/6ce158a5-7aba-4844-97ef-733b55d1694e-kube-api-access-dhqkf\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-tmpfs\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990230 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990032 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ljpb"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990565 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990631 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ed825a-5a7b-454e-80f7-5cfa3d459032-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990657 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5acd47-9e68-4600-beff-4ad9454dde7a-proxy-tls\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990687 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf23b11-96d6-4c77-8145-b7928844bd5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990708 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf23b11-96d6-4c77-8145-b7928844bd5e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990757 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw2pn\" (UniqueName: 
\"kubernetes.io/projected/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-kube-api-access-pw2pn\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990782 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178c40d2-9468-43b5-b33b-f95b60268091-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990806 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990829 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58ls\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-kube-api-access-p58ls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992176 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-profile-collector-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992262 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62sfv\" (UniqueName: \"kubernetes.io/projected/981b1dce-6375-4c49-9b16-144c98fc886c-kube-api-access-62sfv\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992585 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46368914-416a-4849-9652-9c3ddae03429-metrics-tls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992637 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-client\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992688 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d77196-c327-47c3-8713-d23038a08e13-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992782 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-images\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993003 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993139 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-webhook-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993186 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea5acd47-9e68-4600-beff-4ad9454dde7a-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993209 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993317 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lxsd\" (UniqueName: \"kubernetes.io/projected/486be3bf-a27f-4a44-97f3-751b782bee1f-kube-api-access-6lxsd\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993711 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-srv-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993773 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mktxp\" (UniqueName: \"kubernetes.io/projected/434d6d34-127a-4de6-8f5c-6ea67008f70a-kube-api-access-mktxp\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:06 crc 
kubenswrapper[4907]: I0127 18:08:06.996705 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996774 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996810 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/831e1c4c-ecd4-4617-ab4a-37acc328a062-proxy-tls\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996835 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvzq\" (UniqueName: \"kubernetes.io/projected/831e1c4c-ecd4-4617-ab4a-37acc328a062-kube-api-access-9cvzq\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996970 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/decaba3c-d32c-4a1d-b413-52c195883560-signing-cabundle\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997427 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7gj\" (UniqueName: \"kubernetes.io/projected/42d77196-c327-47c3-8713-d23038a08e13-kube-api-access-tb7gj\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997541 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jz9k\" (UniqueName: \"kubernetes.io/projected/decaba3c-d32c-4a1d-b413-52c195883560-kube-api-access-9jz9k\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997619 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997713 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce158a5-7aba-4844-97ef-733b55d1694e-config-volume\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:06 crc 
kubenswrapper[4907]: I0127 18:08:06.997826 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daf23b11-96d6-4c77-8145-b7928844bd5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997855 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.998230 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn29f\" (UniqueName: \"kubernetes.io/projected/ea5acd47-9e68-4600-beff-4ad9454dde7a-kube-api-access-xn29f\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.998607 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m7r5\" (UniqueName: \"kubernetes.io/projected/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-kube-api-access-2m7r5\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.998672 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ed825a-5a7b-454e-80f7-5cfa3d459032-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.998912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn456\" (UniqueName: \"kubernetes.io/projected/6781da2d-2096-43fc-857d-d46734c50e16-kube-api-access-cn456\") pod \"migrator-59844c95c7-rv75f\" (UID: \"6781da2d-2096-43fc-857d-d46734c50e16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.998980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999021 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc7ln\" (UniqueName: \"kubernetes.io/projected/178c40d2-9468-43b5-b33b-f95b60268091-kube-api-access-xc7ln\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999085 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmqq\" (UniqueName: \"kubernetes.io/projected/3d3d480c-01ea-4ec4-b238-16e70bb9caff-kube-api-access-6gmqq\") pod 
\"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999149 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-apiservice-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999248 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/486be3bf-a27f-4a44-97f3-751b782bee1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999312 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999351 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9zs\" (UniqueName: \"kubernetes.io/projected/d0ed825a-5a7b-454e-80f7-5cfa3d459032-kube-api-access-mx9zs\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999423 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999613 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46368914-416a-4849-9652-9c3ddae03429-trusted-ca\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999727 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8w6\" (UniqueName: \"kubernetes.io/projected/d667690f-b387-424c-b130-e50277eaa0c4-kube-api-access-mf8w6\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999963 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.000004 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.002700 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:07.502666595 +0000 UTC m=+142.631949257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: W0127 18:08:07.009211 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd427ba67_a9ef_41ef_a2f3_fbe9eb87a69e.slice/crio-f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02 WatchSource:0}: Error finding container f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02: Status 404 returned error can't find the container with id f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02 Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101607 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101789 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101809 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101831 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9zs\" (UniqueName: \"kubernetes.io/projected/d0ed825a-5a7b-454e-80f7-5cfa3d459032-kube-api-access-mx9zs\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101861 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46368914-416a-4849-9652-9c3ddae03429-trusted-ca\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8w6\" (UniqueName: \"kubernetes.io/projected/d667690f-b387-424c-b130-e50277eaa0c4-kube-api-access-mf8w6\") pod \"olm-operator-6b444d44fb-lfqhn\" 
(UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101929 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102042 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt242\" (UniqueName: \"kubernetes.io/projected/5f465d65-342c-410f-9374-d8c5ac6f03e0-kube-api-access-tt242\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d480c-01ea-4ec4-b238-16e70bb9caff-serving-cert\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-bound-sa-token\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102149 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5997de10-6cbe-4099-aa7f-4f50effd0c4e-cert\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102164 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ce158a5-7aba-4844-97ef-733b55d1694e-metrics-tls\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102179 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d480c-01ea-4ec4-b238-16e70bb9caff-config\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-service-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102211 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-config\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102229 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102245 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-srv-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102285 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178c40d2-9468-43b5-b33b-f95b60268091-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc 
kubenswrapper[4907]: I0127 18:08:07.102300 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/decaba3c-d32c-4a1d-b413-52c195883560-signing-key\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102315 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-socket-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102330 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-mountpoint-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102345 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjx8g\" (UniqueName: \"kubernetes.io/projected/f19550a4-d60c-4d8b-ae24-8b43c7b83736-kube-api-access-cjx8g\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-csi-data-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-config\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-config\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102417 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-serving-cert\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102435 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-registration-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102472 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqkf\" (UniqueName: \"kubernetes.io/projected/6ce158a5-7aba-4844-97ef-733b55d1694e-kube-api-access-dhqkf\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102487 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-tmpfs\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102503 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102518 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102542 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wcs\" 
(UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102576 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ed825a-5a7b-454e-80f7-5cfa3d459032-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5acd47-9e68-4600-beff-4ad9454dde7a-proxy-tls\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf23b11-96d6-4c77-8145-b7928844bd5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf23b11-96d6-4c77-8145-b7928844bd5e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw2pn\" (UniqueName: \"kubernetes.io/projected/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-kube-api-access-pw2pn\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102659 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178c40d2-9468-43b5-b33b-f95b60268091-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102689 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-profile-collector-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102721 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-certs\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.102803 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.602781399 +0000 UTC m=+142.732064011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104493 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p58ls\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-kube-api-access-p58ls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104524 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62sfv\" (UniqueName: \"kubernetes.io/projected/981b1dce-6375-4c49-9b16-144c98fc886c-kube-api-access-62sfv\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104544 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46368914-416a-4849-9652-9c3ddae03429-metrics-tls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104572 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-client\") pod \"etcd-operator-b45778765-q8qbc\" (UID: 
\"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d77196-c327-47c3-8713-d23038a08e13-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104608 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-images\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-webhook-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea5acd47-9e68-4600-beff-4ad9454dde7a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104660 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104675 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104693 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lxsd\" (UniqueName: \"kubernetes.io/projected/486be3bf-a27f-4a44-97f3-751b782bee1f-kube-api-access-6lxsd\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104714 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-srv-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104732 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mktxp\" (UniqueName: \"kubernetes.io/projected/434d6d34-127a-4de6-8f5c-6ea67008f70a-kube-api-access-mktxp\") pod \"catalog-operator-68c6474976-85nxl\" (UID: 
\"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104750 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-node-bootstrap-token\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104788 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/831e1c4c-ecd4-4617-ab4a-37acc328a062-proxy-tls\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104808 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cvzq\" (UniqueName: \"kubernetes.io/projected/831e1c4c-ecd4-4617-ab4a-37acc328a062-kube-api-access-9cvzq\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104827 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104845 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/decaba3c-d32c-4a1d-b413-52c195883560-signing-cabundle\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7gj\" (UniqueName: \"kubernetes.io/projected/42d77196-c327-47c3-8713-d23038a08e13-kube-api-access-tb7gj\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104889 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-plugins-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104909 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jz9k\" (UniqueName: \"kubernetes.io/projected/decaba3c-d32c-4a1d-b413-52c195883560-kube-api-access-9jz9k\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 
18:08:07.104925 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104941 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce158a5-7aba-4844-97ef-733b55d1694e-config-volume\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104958 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daf23b11-96d6-4c77-8145-b7928844bd5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104974 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104990 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn29f\" (UniqueName: \"kubernetes.io/projected/ea5acd47-9e68-4600-beff-4ad9454dde7a-kube-api-access-xn29f\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: 
\"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105007 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgcr\" (UniqueName: \"kubernetes.io/projected/5997de10-6cbe-4099-aa7f-4f50effd0c4e-kube-api-access-tfgcr\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105027 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m7r5\" (UniqueName: \"kubernetes.io/projected/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-kube-api-access-2m7r5\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105046 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn456\" (UniqueName: \"kubernetes.io/projected/6781da2d-2096-43fc-857d-d46734c50e16-kube-api-access-cn456\") pod \"migrator-59844c95c7-rv75f\" (UID: \"6781da2d-2096-43fc-857d-d46734c50e16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ed825a-5a7b-454e-80f7-5cfa3d459032-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105330 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc7ln\" (UniqueName: \"kubernetes.io/projected/178c40d2-9468-43b5-b33b-f95b60268091-kube-api-access-xc7ln\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105376 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gmqq\" (UniqueName: \"kubernetes.io/projected/3d3d480c-01ea-4ec4-b238-16e70bb9caff-kube-api-access-6gmqq\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105391 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-apiservice-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: 
\"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105431 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/486be3bf-a27f-4a44-97f3-751b782bee1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.106097 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.106170 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ed825a-5a7b-454e-80f7-5cfa3d459032-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.108192 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.109656 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.110136 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce158a5-7aba-4844-97ef-733b55d1694e-config-volume\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.113662 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-config\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.113752 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.114331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-config\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.115421 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.116717 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d480c-01ea-4ec4-b238-16e70bb9caff-serving-cert\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.117429 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-tmpfs\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.117508 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/decaba3c-d32c-4a1d-b413-52c195883560-signing-cabundle\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.119094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.120944 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-config\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.121601 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea5acd47-9e68-4600-beff-4ad9454dde7a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.122004 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-service-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.124876 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46368914-416a-4849-9652-9c3ddae03429-trusted-ca\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.125298 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ed825a-5a7b-454e-80f7-5cfa3d459032-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.125909 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.127213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.129602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.130397 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-images\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.131284 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d3d480c-01ea-4ec4-b238-16e70bb9caff-config\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.136513 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.136888 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-webhook-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.137148 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bjfcf"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.139758 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178c40d2-9468-43b5-b33b-f95b60268091-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.140166 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf23b11-96d6-4c77-8145-b7928844bd5e-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.140885 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf23b11-96d6-4c77-8145-b7928844bd5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.141011 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.141109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-profile-collector-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.141680 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.142152 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ce158a5-7aba-4844-97ef-733b55d1694e-metrics-tls\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.143104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5acd47-9e68-4600-beff-4ad9454dde7a-proxy-tls\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.143700 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.144878 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-serving-cert\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.145388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178c40d2-9468-43b5-b33b-f95b60268091-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 
18:08:07.162211 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162291 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162425 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-srv-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162466 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/decaba3c-d32c-4a1d-b413-52c195883560-signing-key\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162654 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/831e1c4c-ecd4-4617-ab4a-37acc328a062-proxy-tls\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc 
kubenswrapper[4907]: I0127 18:08:07.163445 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/486be3bf-a27f-4a44-97f3-751b782bee1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.164128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-client\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.164190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-srv-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.164381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46368914-416a-4849-9652-9c3ddae03429-metrics-tls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.167338 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.168415 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d77196-c327-47c3-8713-d23038a08e13-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.172167 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-bound-sa-token\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.174134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-apiservice-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.184633 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.190407 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc7ln\" (UniqueName: \"kubernetes.io/projected/178c40d2-9468-43b5-b33b-f95b60268091-kube-api-access-xc7ln\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.207873 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-certs\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208012 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-node-bootstrap-token\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208102 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-plugins-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgcr\" (UniqueName: \"kubernetes.io/projected/5997de10-6cbe-4099-aa7f-4f50effd0c4e-kube-api-access-tfgcr\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208310 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt242\" (UniqueName: \"kubernetes.io/projected/5f465d65-342c-410f-9374-d8c5ac6f03e0-kube-api-access-tt242\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208370 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5997de10-6cbe-4099-aa7f-4f50effd0c4e-cert\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208411 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-mountpoint-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208436 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjx8g\" (UniqueName: \"kubernetes.io/projected/f19550a4-d60c-4d8b-ae24-8b43c7b83736-kube-api-access-cjx8g\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-socket-dir\") pod 
\"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208482 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-csi-data-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-registration-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208712 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208916 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-registration-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209003 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-mountpoint-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: 
\"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-socket-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-csi-data-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209406 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-plugins-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.209642 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.709629182 +0000 UTC m=+142.838911794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.217075 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-certs\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.227085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5997de10-6cbe-4099-aa7f-4f50effd0c4e-cert\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.227244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8w6\" (UniqueName: \"kubernetes.io/projected/d667690f-b387-424c-b130-e50277eaa0c4-kube-api-access-mf8w6\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.231088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-node-bootstrap-token\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " 
pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.240834 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gmqq\" (UniqueName: \"kubernetes.io/projected/3d3d480c-01ea-4ec4-b238-16e70bb9caff-kube-api-access-6gmqq\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.257045 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daf23b11-96d6-4c77-8145-b7928844bd5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.264731 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.268912 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.282053 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-znwrp"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.282986 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7gj\" (UniqueName: \"kubernetes.io/projected/42d77196-c327-47c3-8713-d23038a08e13-kube-api-access-tb7gj\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc 
kubenswrapper[4907]: I0127 18:08:07.297246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn29f\" (UniqueName: \"kubernetes.io/projected/ea5acd47-9e68-4600-beff-4ad9454dde7a-kube-api-access-xn29f\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.309278 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.309595 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.809543839 +0000 UTC m=+142.938826461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.310140 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.310934 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.810821337 +0000 UTC m=+142.940103949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.322088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9zs\" (UniqueName: \"kubernetes.io/projected/d0ed825a-5a7b-454e-80f7-5cfa3d459032-kube-api-access-mx9zs\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.331000 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.341724 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.346161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn456\" (UniqueName: \"kubernetes.io/projected/6781da2d-2096-43fc-857d-d46734c50e16-kube-api-access-cn456\") pod \"migrator-59844c95c7-rv75f\" (UID: \"6781da2d-2096-43fc-857d-d46734c50e16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.358409 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.360352 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqkf\" (UniqueName: \"kubernetes.io/projected/6ce158a5-7aba-4844-97ef-733b55d1694e-kube-api-access-dhqkf\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.380056 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.382910 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: W0127 18:08:07.396187 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38363947_4768_44b8_b3fe_f7b5b482da55.slice/crio-558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef WatchSource:0}: Error finding container 558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef: Status 404 returned error can't find the container with id 558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.396844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw2pn\" (UniqueName: \"kubernetes.io/projected/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-kube-api-access-pw2pn\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.399245 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.408517 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.411489 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.411653 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.911628511 +0000 UTC m=+143.040911123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.411710 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.412049 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-27 18:08:07.912041534 +0000 UTC m=+143.041324136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.486262 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.486718 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.487088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mktxp\" (UniqueName: \"kubernetes.io/projected/434d6d34-127a-4de6-8f5c-6ea67008f70a-kube-api-access-mktxp\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.506871 4907 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.506890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lxsd\" (UniqueName: \"kubernetes.io/projected/486be3bf-a27f-4a44-97f3-751b782bee1f-kube-api-access-6lxsd\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.508905 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.512282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerStarted","Data":"4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.512337 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerStarted","Data":"d2a5d5dce677965b17c0ca35c20daae0415842ae46a59912803764ae0ae6a316"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.513305 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.513306 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.513395 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.013374843 +0000 UTC m=+143.142657455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.513949 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.514279 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.014267809 +0000 UTC m=+143.143550411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.517128 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.517162 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.517763 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerStarted","Data":"32f9fc7b4aa47ac4989c51f777b9f45caaf74a3f6d839b65c03c029afd8ca470"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.520376 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.526207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerStarted","Data":"53f1bb78246a95f04a0e3a59320d7de5b66a380634a406b2deaad462424ff23c"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.526841 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.527483 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.532417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" event={"ID":"1c678cbb-a03d-4ed8-85bd-befc2884454e","Type":"ContainerStarted","Data":"d99c210a2afd1a8dd3ab7ad3937e9daba804e9a0fbf6ebffd2362282c67ee2e1"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.533041 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.536751 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 18:08:07 crc 
kubenswrapper[4907]: I0127 18:08:07.536779 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.537684 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.537707 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.538058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" event={"ID":"b3e7e0e7-2f37-4998-af7c-6e5d373a1264","Type":"ContainerStarted","Data":"7e11b6f7056136bab0dac9410bba8d691c4dd67358da145faaeef6657053eabb"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.539627 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" event={"ID":"9f254819-bf2c-4c38-881f-8d12a0d56278","Type":"ContainerStarted","Data":"e90ccbb7d1f9506a1d7c5832c29bc196054837999c5a0a011ef29e54c9ff8054"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.539812 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p58ls\" (UniqueName: 
\"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-kube-api-access-p58ls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.547655 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.548935 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h72cm" event={"ID":"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e","Type":"ContainerStarted","Data":"4a63025614479ee19d91ddae3c53b6b1b161f48d3ae54e551048bae3e81386a3"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.548960 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h72cm" event={"ID":"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e","Type":"ContainerStarted","Data":"f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.552783 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" event={"ID":"28a88f65-9871-4372-b728-ed61f22642e4","Type":"ContainerStarted","Data":"6f4c2bf3bee5eb016a2fe2297cdf16be3247579fde93f61857aa0e5fd2f98c42"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.554615 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.557694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerStarted","Data":"558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.559074 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m7r5\" (UniqueName: \"kubernetes.io/projected/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-kube-api-access-2m7r5\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.560328 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerStarted","Data":"4a8097cce43ecee42c97c1d9ab5869697b268e0b34ef8036d5f9d6948ff49dc9"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.562289 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.568075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" event={"ID":"bb98d017-ae04-4e9d-9b9f-dde9530b7acf","Type":"ContainerStarted","Data":"7bac43dbe37ef69aab73ceb84da67c628af00be47c258c1c351f33f09618fc07"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.568121 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" event={"ID":"bb98d017-ae04-4e9d-9b9f-dde9530b7acf","Type":"ContainerStarted","Data":"8879a2adf5d3fbe3e0ecf787399134e695e87c642d46480f76194b3a13bbe9f6"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.568132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" event={"ID":"bb98d017-ae04-4e9d-9b9f-dde9530b7acf","Type":"ContainerStarted","Data":"79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.569919 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.578419 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerStarted","Data":"cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.578465 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerStarted","Data":"6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.581229 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jz9k\" (UniqueName: \"kubernetes.io/projected/decaba3c-d32c-4a1d-b413-52c195883560-kube-api-access-9jz9k\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.583299 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.593523 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.604000 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.616095 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.616124 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62sfv\" (UniqueName: \"kubernetes.io/projected/981b1dce-6375-4c49-9b16-144c98fc886c-kube-api-access-62sfv\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.618647 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.618692 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5z9d9"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.624356 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.625695 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.125658527 +0000 UTC m=+143.254941139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.626170 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.628530 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.128513651 +0000 UTC m=+143.257796263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.634908 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.635137 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.637245 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cvzq\" (UniqueName: \"kubernetes.io/projected/831e1c4c-ecd4-4617-ab4a-37acc328a062-kube-api-access-9cvzq\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.650299 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjx8g\" (UniqueName: \"kubernetes.io/projected/f19550a4-d60c-4d8b-ae24-8b43c7b83736-kube-api-access-cjx8g\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.650427 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.657632 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgcr\" (UniqueName: \"kubernetes.io/projected/5997de10-6cbe-4099-aa7f-4f50effd0c4e-kube-api-access-tfgcr\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.666087 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.676255 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.690580 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.696221 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt242\" (UniqueName: \"kubernetes.io/projected/5f465d65-342c-410f-9374-d8c5ac6f03e0-kube-api-access-tt242\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.722279 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.727345 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.727723 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.227700057 +0000 UTC m=+143.356982669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.727855 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.730842 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.23082592 +0000 UTC m=+143.360108532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.733435 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.752858 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.759599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.813058 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.825605 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.829614 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.829736 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.329717137 +0000 UTC m=+143.458999749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.830009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.830284 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.330277644 +0000 UTC m=+143.459560256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.850721 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.902433 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.929775 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.929843 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.931044 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc 
kubenswrapper[4907]: E0127 18:08:07.931460 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.431441598 +0000 UTC m=+143.560724210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.018666 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.033123 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.033515 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.533501039 +0000 UTC m=+143.662783651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.134816 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.135214 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.635195359 +0000 UTC m=+143.764477971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.139748 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.163582 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.239589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.240173 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.740155176 +0000 UTC m=+143.869437788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.340716 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.341044 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.841016551 +0000 UTC m=+143.970299163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.342501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.345981 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4tcrf"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.374039 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.389837 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-flpjm"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.414349 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.441638 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.443198 4907 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.943185186 +0000 UTC m=+144.072467798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.539357 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podStartSLOduration=118.539336932 podStartE2EDuration="1m58.539336932s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:08.533481958 +0000 UTC m=+143.662764570" watchObservedRunningTime="2026-01-27 18:08:08.539336932 +0000 UTC m=+143.668619544" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.543884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.544186 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:09.044166325 +0000 UTC m=+144.173448937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.587866 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.595366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" event={"ID":"28a88f65-9871-4372-b728-ed61f22642e4","Type":"ContainerStarted","Data":"990fbac1ae65dd5c009c4ec7618eb00ba55a0f5e47ef13c2055df017e2eb8f65"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.596834 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" event={"ID":"3d3d480c-01ea-4ec4-b238-16e70bb9caff","Type":"ContainerStarted","Data":"9b6a6e0d3557e333aa6ebc3f09b5aeadb700791a26b82b9b31492ffc5a9e2a83"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.601181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bvqd5" event={"ID":"f19550a4-d60c-4d8b-ae24-8b43c7b83736","Type":"ContainerStarted","Data":"aec38663e8358c263b3871f77568e546efde6179ff58c52e07c1980a44d087db"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.607453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" 
event={"ID":"daf23b11-96d6-4c77-8145-b7928844bd5e","Type":"ContainerStarted","Data":"7d891d167b39c7dc384ec00211c7278855d3352fe73863bf1aa6bfb697a86351"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.612222 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.614858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" event={"ID":"b3e7e0e7-2f37-4998-af7c-6e5d373a1264","Type":"ContainerStarted","Data":"87b7ad9b3a9ce5e3199bc7b27d8d27ddbefb2f3ab58c3ef32127f866fb2e00bf"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.626280 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" event={"ID":"46368914-416a-4849-9652-9c3ddae03429","Type":"ContainerStarted","Data":"f1c988b6bdba0916d98a1428a297da63f92395f1b3b10fc521f23ae90ce2273c"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.629861 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" event={"ID":"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab","Type":"ContainerStarted","Data":"001ba9c6a78fd8a0b8e310382e548650245bfd340ae5d0347a220dc47a737b5b"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.630704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerStarted","Data":"ab8e4f87a2ffbe236cf7a8b9faa6044f734bcd783a8ccf483230babd8b2d0aab"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.631445 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" 
event={"ID":"178c40d2-9468-43b5-b33b-f95b60268091","Type":"ContainerStarted","Data":"c7087ef47743d40ae48aec5afab43cef6549f9764b0752cf7289d5e27de0e427"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.633137 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" event={"ID":"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56","Type":"ContainerStarted","Data":"8b21840fe73506f9ce9256b1be11ad3a0391c72b45cbea4c1dbd87ae175b38d7"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.634655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerStarted","Data":"61f8aa33026aa73cee0be39f4ed156775e988a1bf641f3de35f9c4e6b9f36e7d"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.636070 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" event={"ID":"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f","Type":"ContainerStarted","Data":"b8952e73f83f3de3560dfe525a09e2a1044efd2da33cce2c4c3d904ba116c7f5"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.639221 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerStarted","Data":"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.639310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerStarted","Data":"4f4e687b11dd2ca7eb21e3c540aa81cbaa9c488161aa4b888533995942e8fa1a"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.640533 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" event={"ID":"d667690f-b387-424c-b130-e50277eaa0c4","Type":"ContainerStarted","Data":"f58df230b00f18699127081b816a50d8cd0c814c5b3bca7b53e9bdf6c934eee1"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.643128 4907 generic.go:334] "Generic (PLEG): container finished" podID="9f254819-bf2c-4c38-881f-8d12a0d56278" containerID="f06976dfcd4d145633eb1bc145b26b5d9bd8dea20c28526e934bbfc3f6bde8ff" exitCode=0 Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.643378 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" event={"ID":"9f254819-bf2c-4c38-881f-8d12a0d56278","Type":"ContainerDied","Data":"f06976dfcd4d145633eb1bc145b26b5d9bd8dea20c28526e934bbfc3f6bde8ff"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.645992 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.649286 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.149271126 +0000 UTC m=+144.278553738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.649999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4tcrf" event={"ID":"6ce158a5-7aba-4844-97ef-733b55d1694e","Type":"ContainerStarted","Data":"438542bcaf81b0c512cc8a170693d4ff4fdb06e0cff8121934c77b1b1bc61114"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.655984 4907 generic.go:334] "Generic (PLEG): container finished" podID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerID="b92b7c2e43573dc8728d927bcac289f984bfbe45c4a3fe1432917c4917be66f5" exitCode=0 Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.656104 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerDied","Data":"b92b7c2e43573dc8728d927bcac289f984bfbe45c4a3fe1432917c4917be66f5"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.657084 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-h79fx" podStartSLOduration=118.657068567 podStartE2EDuration="1m58.657068567s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:08.655918483 +0000 UTC m=+143.785201095" watchObservedRunningTime="2026-01-27 18:08:08.657068567 +0000 UTC m=+143.786351179" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.658281 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerStarted","Data":"ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.659313 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.659402 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.666056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerStarted","Data":"764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.669868 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" event={"ID":"1c678cbb-a03d-4ed8-85bd-befc2884454e","Type":"ContainerStarted","Data":"66d15642d3d727c638457f6bc9f91c0efdde17fb6615afcd4b939c46af04e15f"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.670393 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 
10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.670427 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.672102 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" event={"ID":"decaba3c-d32c-4a1d-b413-52c195883560","Type":"ContainerStarted","Data":"fcfbfad960f1a107e7b6ca84067670b60a3eef9dfe481aa3d4549fcfe71c6cfd"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.675540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" event={"ID":"d9ccc9d3-faa6-4c00-830b-2e1549a6725d","Type":"ContainerStarted","Data":"26cf2d10f39448594d8ec1b0ecc447533892109d73f76448a5c077f2335c5d11"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.677366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" event={"ID":"42d77196-c327-47c3-8713-d23038a08e13","Type":"ContainerStarted","Data":"c6299b2cc40813b33ac19a245a355c70688d86aa8a66a7eec2b95e4040aadd6f"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.680053 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" event={"ID":"b66d56fc-163d-469a-8a47-a3e1462b1af8","Type":"ContainerStarted","Data":"8c51ad7b9f37a6e7fa7e5aa89b67c1dfe374160dfa7f0dde2ba26e75fc5fb6d4"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.680823 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.680878 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.737115 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.739698 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.746805 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.747532 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.247483513 +0000 UTC m=+144.376766125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.747907 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.755425 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.255401338 +0000 UTC m=+144.384683950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.803590 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h72cm" podStartSLOduration=118.803532002 podStartE2EDuration="1m58.803532002s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:08.775682148 +0000 UTC m=+143.904964760" watchObservedRunningTime="2026-01-27 18:08:08.803532002 +0000 UTC m=+143.932814614" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.809861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.814182 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.815999 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.819047 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.850162 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.850523 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.350502493 +0000 UTC m=+144.479785105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.854484 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.854543 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: W0127 18:08:08.874282 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea5acd47_9e68_4600_beff_4ad9454dde7a.slice/crio-3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484 WatchSource:0}: Error finding container 3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484: Status 404 returned error can't find the container with id 3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484 Jan 27 18:08:08 crc kubenswrapper[4907]: W0127 18:08:08.877174 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ca8f687_0e6e_4df7_8dc1_0bb597588b6d.slice/crio-18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105 WatchSource:0}: Error finding container 18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105: Status 404 returned error can't find the container with id 18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105 Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.889521 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.902370 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8qbc"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.951097 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qtfgw"] Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.952089 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.452074819 +0000 UTC m=+144.581357431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.952277 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.953593 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l59wn"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.975772 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"] Jan 27 18:08:09 crc kubenswrapper[4907]: W0127 18:08:09.007327 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5997de10_6cbe_4099_aa7f_4f50effd0c4e.slice/crio-2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0 WatchSource:0}: Error finding container 2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0: Status 404 returned error can't find the container with id 2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0 Jan 27 18:08:09 crc kubenswrapper[4907]: W0127 18:08:09.020651 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f465d65_342c_410f_9374_d8c5ac6f03e0.slice/crio-ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b WatchSource:0}: Error finding container ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b: Status 404 returned error can't find the container with id ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b Jan 27 18:08:09 crc kubenswrapper[4907]: W0127 18:08:09.033003 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod831e1c4c_ecd4_4617_ab4a_37acc328a062.slice/crio-9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7 WatchSource:0}: Error finding container 9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7: Status 404 returned error can't find the container with id 9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7 Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.052771 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.053180 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.553159252 +0000 UTC m=+144.682441864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.158456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.158817 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.658802039 +0000 UTC m=+144.788084651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.212906 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podStartSLOduration=119.21288791 podStartE2EDuration="1m59.21288791s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.212169489 +0000 UTC m=+144.341452101" watchObservedRunningTime="2026-01-27 18:08:09.21288791 +0000 UTC m=+144.342170522" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.256787 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podStartSLOduration=119.256764159 podStartE2EDuration="1m59.256764159s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.252500862 +0000 UTC m=+144.381783474" watchObservedRunningTime="2026-01-27 18:08:09.256764159 +0000 UTC m=+144.386046771" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.259931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.260284 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.760262502 +0000 UTC m=+144.889545114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.361241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.361679 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.861662474 +0000 UTC m=+144.990945086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.371707 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" podStartSLOduration=119.371687481 podStartE2EDuration="1m59.371687481s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.340946451 +0000 UTC m=+144.470229063" watchObservedRunningTime="2026-01-27 18:08:09.371687481 +0000 UTC m=+144.500970093" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.462225 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.462407 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.962375365 +0000 UTC m=+145.091657977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.462849 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.463266 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.963248881 +0000 UTC m=+145.092531493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.542219 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" podStartSLOduration=119.542192998 podStartE2EDuration="1m59.542192998s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.540413405 +0000 UTC m=+144.669696037" watchObservedRunningTime="2026-01-27 18:08:09.542192998 +0000 UTC m=+144.671475610" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.591068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.591581 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.091541119 +0000 UTC m=+145.220823731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.693722 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.694152 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.194120315 +0000 UTC m=+145.323402927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.719827 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" event={"ID":"831e1c4c-ecd4-4617-ab4a-37acc328a062","Type":"ContainerStarted","Data":"9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.733250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" event={"ID":"46368914-416a-4849-9652-9c3ddae03429","Type":"ContainerStarted","Data":"7394209dadc4db9fff250a093b11268de96228180a44ce714a4ee786da97c7d7"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.743110 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" event={"ID":"486be3bf-a27f-4a44-97f3-751b782bee1f","Type":"ContainerStarted","Data":"45e9f53b4855567813ba17c6a76068635f9f7993467db5063abdc60aaa9036cd"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.745069 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" event={"ID":"434d6d34-127a-4de6-8f5c-6ea67008f70a","Type":"ContainerStarted","Data":"356df355c8127a88218540166db9f749a519a94b16bf1b196497ca93d12953da"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.745095 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" event={"ID":"434d6d34-127a-4de6-8f5c-6ea67008f70a","Type":"ContainerStarted","Data":"c4e6fa1620ba5175973c824748e1401fdecac7f0e0317ccf17b38b04dcd9c542"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.745937 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.753738 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.753795 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.782230 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerStarted","Data":"048e1dff87736aab4fe9377cb7178695cb1717c58a7071500e8672995ab0ccd7"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.782270 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bvqd5" event={"ID":"f19550a4-d60c-4d8b-ae24-8b43c7b83736","Type":"ContainerStarted","Data":"1be134f3bfbb3dc60a4950829058b394c29cdc79afc1b884fd9097939196b1bc"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.801342 4907 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podStartSLOduration=118.801319868 podStartE2EDuration="1m58.801319868s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.801193535 +0000 UTC m=+144.930476147" watchObservedRunningTime="2026-01-27 18:08:09.801319868 +0000 UTC m=+144.930602480" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.801753 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.801819 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.301805203 +0000 UTC m=+145.431087815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.811412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.812111 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.312092767 +0000 UTC m=+145.441375379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.826447 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bvqd5" podStartSLOduration=5.8264235509999995 podStartE2EDuration="5.826423551s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.823510295 +0000 UTC m=+144.952792897" watchObservedRunningTime="2026-01-27 18:08:09.826423551 +0000 UTC m=+144.955706163" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.848728 4907 csr.go:261] certificate signing request csr-bg9tz is approved, waiting to be issued Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.849815 4907 csr.go:257] certificate signing request csr-bg9tz is issued Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.858731 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" event={"ID":"8ca4e1e6-2eaa-436c-a083-0d33fe87c756","Type":"ContainerStarted","Data":"a0a5c68d34aa2349d3f473e6ebcae9439f2dc066fca2284c2289910b04a0d052"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.865682 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 
18:08:09 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:09 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:09 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.865788 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.892038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" event={"ID":"b66d56fc-163d-469a-8a47-a3e1462b1af8","Type":"ContainerStarted","Data":"368ea393733f1a4c1077c38caa10663b2ce22f7cddeb3632d1b4728e87864a5c"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.894067 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" event={"ID":"3d3d480c-01ea-4ec4-b238-16e70bb9caff","Type":"ContainerStarted","Data":"8a4373e23021954161d7f4a727d5a43d9b095b9295620195b1203d502f066d42"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.904999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" event={"ID":"981b1dce-6375-4c49-9b16-144c98fc886c","Type":"ContainerStarted","Data":"e8792302a3e200cd9d7cc033f9b5e511edcc72f5c3187b6dd09723e3bf83589f"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.915840 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.916263 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" podStartSLOduration=119.91624621 podStartE2EDuration="1m59.91624621s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.915270301 +0000 UTC m=+145.044552903" watchObservedRunningTime="2026-01-27 18:08:09.91624621 +0000 UTC m=+145.045528822" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.917379 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.417355063 +0000 UTC m=+145.546637675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.918057 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" event={"ID":"ea5acd47-9e68-4600-beff-4ad9454dde7a","Type":"ContainerStarted","Data":"3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.937693 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" 
event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerStarted","Data":"5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.939837 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.941203 4907 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7mcmq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.941241 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.943252 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" event={"ID":"decaba3c-d32c-4a1d-b413-52c195883560","Type":"ContainerStarted","Data":"8112c09c6fd4a12759a13e8be41678b9498286bbd7122b30c9adada2bcb74e23"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.009720 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" podStartSLOduration=119.009697987 podStartE2EDuration="1m59.009697987s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.973128124 +0000 UTC 
m=+145.102410736" watchObservedRunningTime="2026-01-27 18:08:10.009697987 +0000 UTC m=+145.138980599" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.011861 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" podStartSLOduration=119.01185388 podStartE2EDuration="1m59.01185388s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.005212224 +0000 UTC m=+145.134494836" watchObservedRunningTime="2026-01-27 18:08:10.01185388 +0000 UTC m=+145.141136492" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.013315 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerStarted","Data":"751f6790eddcfff181547cb7090e8c80fd9fdf4c4aa3c45b341c6ab12bb2cee7"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.014669 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.017018 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.020246 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:10.520230218 +0000 UTC m=+145.649512830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.026197 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podStartSLOduration=119.026177814 podStartE2EDuration="1m59.026177814s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.025620558 +0000 UTC m=+145.154903180" watchObservedRunningTime="2026-01-27 18:08:10.026177814 +0000 UTC m=+145.155460426" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.029571 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn59x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.029617 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.030922 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerStarted","Data":"18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.031807 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.033100 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.033125 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.044940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.061262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" event={"ID":"daf23b11-96d6-4c77-8145-b7928844bd5e","Type":"ContainerStarted","Data":"6736e245c631ab752552155875f3d8bed66efeb20071accd516574ec3c53c9da"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.072479 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" podStartSLOduration=119.072459164 podStartE2EDuration="1m59.072459164s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.050327759 +0000 UTC m=+145.179610381" watchObservedRunningTime="2026-01-27 18:08:10.072459164 +0000 UTC m=+145.201741776" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.073621 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podStartSLOduration=119.073613858 podStartE2EDuration="1m59.073613858s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.071412683 +0000 UTC m=+145.200695325" watchObservedRunningTime="2026-01-27 18:08:10.073613858 +0000 UTC m=+145.202896470" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.082806 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" event={"ID":"b3e7e0e7-2f37-4998-af7c-6e5d373a1264","Type":"ContainerStarted","Data":"6552446c5b4b19439a108a98e88ffb461a2c1e3e996f35ac7a40da27c783fd05"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.092826 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" event={"ID":"d0ed825a-5a7b-454e-80f7-5cfa3d459032","Type":"ContainerStarted","Data":"27aeacd9ac0b4134e47ebe6a06655e85d25b03cdfff3285ca2f5ff0cb845e200"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.100024 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" podStartSLOduration=120.10000184 podStartE2EDuration="2m0.10000184s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.099191616 +0000 UTC m=+145.228474228" watchObservedRunningTime="2026-01-27 18:08:10.10000184 +0000 UTC m=+145.229284462"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.104073 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" event={"ID":"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab","Type":"ContainerStarted","Data":"97f692e300917600042d52f72fab797a3d8d5bc1cca3c2f563b92cdddd9ea24d"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.115475 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" event={"ID":"42d77196-c327-47c3-8713-d23038a08e13","Type":"ContainerStarted","Data":"b66117fad3c7b38c73cadb10b3ac15033d3b42ac68331268ac63456de3c6b9ae"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.120699 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.121937 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.621913288 +0000 UTC m=+145.751195900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.126918 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podStartSLOduration=119.126904276 podStartE2EDuration="1m59.126904276s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.122145535 +0000 UTC m=+145.251428167" watchObservedRunningTime="2026-01-27 18:08:10.126904276 +0000 UTC m=+145.256186888"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.150404 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4tcrf" event={"ID":"6ce158a5-7aba-4844-97ef-733b55d1694e","Type":"ContainerStarted","Data":"c3037282fd2c948713bcb1fb5de0e6880b822524d1ef4ee62860d63dcc553e41"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.152173 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" event={"ID":"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56","Type":"ContainerStarted","Data":"271a93d46916e2598f357502f32c17324daeb6631a24e3788b69844dfac7c454"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.153702 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" event={"ID":"d9ccc9d3-faa6-4c00-830b-2e1549a6725d","Type":"ContainerStarted","Data":"f7bbcb0ec3067846d3bed3cdc9a8fde003e4cd4d011cd3657e3f646ac9b6876e"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.156063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" event={"ID":"6781da2d-2096-43fc-857d-d46734c50e16","Type":"ContainerStarted","Data":"61ec2e1254000a48ca140c481bf406a9161e0e16da9e86559031e5ebc467417c"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.162839 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" podStartSLOduration=120.162815359 podStartE2EDuration="2m0.162815359s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.162009735 +0000 UTC m=+145.291292347" watchObservedRunningTime="2026-01-27 18:08:10.162815359 +0000 UTC m=+145.292097971"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.169512 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" event={"ID":"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f","Type":"ContainerStarted","Data":"72dfded737a3b5402054537e151e18c454369edd7634d4cdbb665a1aec580b1d"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.183434 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" event={"ID":"d667690f-b387-424c-b130-e50277eaa0c4","Type":"ContainerStarted","Data":"933c2dd6f7e717ec6abace0a5e717c50f793a554ccfd3b64e57446afeb7ccc72"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.186305 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.187345 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.187381 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.193955 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" event={"ID":"178c40d2-9468-43b5-b33b-f95b60268091","Type":"ContainerStarted","Data":"a4ca03c4c2426d4159d8f34134cadb220757b80f28156a08285f10c201bce70b"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.200463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qtfgw" event={"ID":"5997de10-6cbe-4099-aa7f-4f50effd0c4e","Type":"ContainerStarted","Data":"2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.214976 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" podStartSLOduration=119.214957903 podStartE2EDuration="1m59.214957903s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.185344676 +0000 UTC m=+145.314627288" watchObservedRunningTime="2026-01-27 18:08:10.214957903 +0000 UTC m=+145.344240515"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.216915 4907 generic.go:334] "Generic (PLEG): container finished" podID="38363947-4768-44b8-b3fe-f7b5b482da55" containerID="61f8aa33026aa73cee0be39f4ed156775e988a1bf641f3de35f9c4e6b9f36e7d" exitCode=0
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.220038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerDied","Data":"61f8aa33026aa73cee0be39f4ed156775e988a1bf641f3de35f9c4e6b9f36e7d"}
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.224440 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.224509 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.224800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.225083 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.225167 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.225403 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.226304 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.726290058 +0000 UTC m=+145.855572670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.245888 4907 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lg6ln container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.245942 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.246051 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.246171 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.249230 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" podStartSLOduration=119.249212256 podStartE2EDuration="1m59.249212256s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.212874111 +0000 UTC m=+145.342156723" watchObservedRunningTime="2026-01-27 18:08:10.249212256 +0000 UTC m=+145.378494868"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.277766 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podStartSLOduration=119.277741391 podStartE2EDuration="1m59.277741391s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.2760171 +0000 UTC m=+145.405299712" watchObservedRunningTime="2026-01-27 18:08:10.277741391 +0000 UTC m=+145.407024003"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.279708 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" podStartSLOduration=120.279697449 podStartE2EDuration="2m0.279697449s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.250664249 +0000 UTC m=+145.379946861" watchObservedRunningTime="2026-01-27 18:08:10.279697449 +0000 UTC m=+145.408980071"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.308241 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" podStartSLOduration=120.308217303 podStartE2EDuration="2m0.308217303s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.30170968 +0000 UTC m=+145.430992292" watchObservedRunningTime="2026-01-27 18:08:10.308217303 +0000 UTC m=+145.437499915"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.329014 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.330443 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.83042336 +0000 UTC m=+145.959705972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.364620 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qtfgw" podStartSLOduration=6.36319128 podStartE2EDuration="6.36319128s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.327931807 +0000 UTC m=+145.457214409" watchObservedRunningTime="2026-01-27 18:08:10.36319128 +0000 UTC m=+145.492473892"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.379585 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" podStartSLOduration=120.379542294 podStartE2EDuration="2m0.379542294s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.364951192 +0000 UTC m=+145.494233804" watchObservedRunningTime="2026-01-27 18:08:10.379542294 +0000 UTC m=+145.508824906"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.430987 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.431420 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.93140233 +0000 UTC m=+146.060684942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.443486 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" podStartSLOduration=120.443466887 podStartE2EDuration="2m0.443466887s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.42095725 +0000 UTC m=+145.550239862" watchObservedRunningTime="2026-01-27 18:08:10.443466887 +0000 UTC m=+145.572749499"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.492435 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-grwdr" podStartSLOduration=120.492409075 podStartE2EDuration="2m0.492409075s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.446117635 +0000 UTC m=+145.575400257" watchObservedRunningTime="2026-01-27 18:08:10.492409075 +0000 UTC m=+145.621691697"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.532324 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.532748 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.032712588 +0000 UTC m=+146.161995210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.532889 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.533401 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.033371908 +0000 UTC m=+146.162654520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.634619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.634733 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.134711878 +0000 UTC m=+146.263994490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.634902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.635214 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.135207552 +0000 UTC m=+146.264490164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.736772 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.736974 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.236933784 +0000 UTC m=+146.366216396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.737199 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.737483 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.23747105 +0000 UTC m=+146.366753662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.837932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.838461 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.338442709 +0000 UTC m=+146.467725311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.850855 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 18:03:09 +0000 UTC, rotation deadline is 2026-10-22 23:57:43.249100769 +0000 UTC
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.850934 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6437h49m32.398170141s for next certificate rotation
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.854511 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:10 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:10 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:10 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.854602 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.940202 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.940643 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.440625463 +0000 UTC m=+146.569908075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.041619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.041832 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.541800018 +0000 UTC m=+146.671082630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.041939 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.042346 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.542331234 +0000 UTC m=+146.671613846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.143806 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.144011 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.643979973 +0000 UTC m=+146.773262585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.144115 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.144577 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.644545629 +0000 UTC m=+146.773828241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.223204 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" event={"ID":"ea5acd47-9e68-4600-beff-4ad9454dde7a","Type":"ContainerStarted","Data":"aed3f87f05b661174fc31cac2eb6f54fc32677dac4ae688ee496b5c8e8e7ce13"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.223250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" event={"ID":"ea5acd47-9e68-4600-beff-4ad9454dde7a","Type":"ContainerStarted","Data":"25e833908f081118d717b03f2b19f8aaf225ea8ed86c0ee59090108622487371"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.226006 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerStarted","Data":"8a47adad5e917e30457b41fed58f3ef8e65c07845f3812ee84f0a45d505023e6"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.226456 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.227972 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" event={"ID":"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f","Type":"ContainerStarted","Data":"04e4e680c43122d51096c187a384cfc29fedf39b745cda95753364e4b1496a2a"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.230666 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" event={"ID":"6781da2d-2096-43fc-857d-d46734c50e16","Type":"ContainerStarted","Data":"506b31ee155e1bb451634dee66e8943aa67e6462728cf44d654c8d646b0ca2f3"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.230724 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" event={"ID":"6781da2d-2096-43fc-857d-d46734c50e16","Type":"ContainerStarted","Data":"e2f891df33a718082dedb90fed0acc1f20496b6a841938f2b4fbabf5817a9341"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.232228 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qtfgw" event={"ID":"5997de10-6cbe-4099-aa7f-4f50effd0c4e","Type":"ContainerStarted","Data":"70200604599a9563eaa08a7723ed1afc4736073249499a7c0657bb946942275b"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.234100 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerStarted","Data":"39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8"}
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.235154 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.235191 4907 prober.go:107] "Probe failed" probeType="Readiness"
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.236062 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" event={"ID":"8ca4e1e6-2eaa-436c-a083-0d33fe87c756","Type":"ContainerStarted","Data":"f2666cfd77e1b691a60e5ec9529ca2d92988056a8afe19f485b55f4341b8afca"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.236109 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" event={"ID":"8ca4e1e6-2eaa-436c-a083-0d33fe87c756","Type":"ContainerStarted","Data":"4b7a80dc8ce421404587ea370c5379fd09f8a7f8a2e8aa219a6df7b1d653d4ea"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.237650 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" event={"ID":"831e1c4c-ecd4-4617-ab4a-37acc328a062","Type":"ContainerStarted","Data":"d449f31194a913b1848bfc1a9395fea50eba99ab526b5e1290f17d71441602c7"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.237678 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" event={"ID":"831e1c4c-ecd4-4617-ab4a-37acc328a062","Type":"ContainerStarted","Data":"4708299511e8a5d9f150554b1504c61d714ad11cc142f42df40448eb5a48c33f"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.239475 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" event={"ID":"981b1dce-6375-4c49-9b16-144c98fc886c","Type":"ContainerStarted","Data":"a8a7e68a1e2f50bdf7f17e555af8e56e9db1199f2e494c3ddbedfc61cd93cc03"} Jan 27 
18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.241088 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" event={"ID":"d9ccc9d3-faa6-4c00-830b-2e1549a6725d","Type":"ContainerStarted","Data":"61bd66d5cc2d8b65e7e6fa0c72b92eb278365f98ac17f4debeb4cf95efa59393"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.243069 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerStarted","Data":"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.243635 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn59x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.243690 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.244850 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.245003 4907 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.744981103 +0000 UTC m=+146.874263715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.245161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.245451 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.745440696 +0000 UTC m=+146.874723308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.245949 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerStarted","Data":"52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.248760 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerStarted","Data":"02b070fc56aba7b64f6d5a1e5f16be5a2e96728797d79c1df27f0c3466b01b80"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.248802 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerStarted","Data":"bf4a4256435b0fd4a3c356ecbc3ebd238ce11e688210a2f920cf617bb8cf34c0"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.250701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4tcrf" event={"ID":"6ce158a5-7aba-4844-97ef-733b55d1694e","Type":"ContainerStarted","Data":"38b4eed8264775bbbf3e8d3d6a05bea19d2a64c42a151e40f70f518d125e2b0f"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.251160 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.253436 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" event={"ID":"46368914-416a-4849-9652-9c3ddae03429","Type":"ContainerStarted","Data":"5960d5c470a01bc353a08f53d7146c8b2eff8357abebb583a1c08e8d5e2efeb9"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.255548 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" event={"ID":"486be3bf-a27f-4a44-97f3-751b782bee1f","Type":"ContainerStarted","Data":"444f3c824c48a375f8581dcc1eb8f5a798f2d0be5ed7bd3ff86803a8f65034ee"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.255586 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" event={"ID":"486be3bf-a27f-4a44-97f3-751b782bee1f","Type":"ContainerStarted","Data":"1b38c60f52621140e0c9a91375769eb94c0aab0298a05cb5008c3033b682f381"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.255835 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.258887 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" event={"ID":"9f254819-bf2c-4c38-881f-8d12a0d56278","Type":"ContainerStarted","Data":"4983d966fe225895264f26786e87f61b467c9409a1bd1deac4339cca6b1e1108"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.260803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" event={"ID":"d0ed825a-5a7b-454e-80f7-5cfa3d459032","Type":"ContainerStarted","Data":"7a6095c566ba193c683ad99a96c93380ccb46307a1a30611da0a364ff4f7a25c"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.262034 4907 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.262090 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.265750 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" podStartSLOduration=121.265736657 podStartE2EDuration="2m1.265736657s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.263546982 +0000 UTC m=+146.392829594" watchObservedRunningTime="2026-01-27 18:08:11.265736657 +0000 UTC m=+146.395019269" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.269981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.280381 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.329808 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" podStartSLOduration=121.329774523 podStartE2EDuration="2m1.329774523s" 
podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.32835427 +0000 UTC m=+146.457636882" watchObservedRunningTime="2026-01-27 18:08:11.329774523 +0000 UTC m=+146.459057135" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.331230 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" podStartSLOduration=120.331224285 podStartE2EDuration="2m0.331224285s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.303193826 +0000 UTC m=+146.432476438" watchObservedRunningTime="2026-01-27 18:08:11.331224285 +0000 UTC m=+146.460506897" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.349068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.349237 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.849198468 +0000 UTC m=+146.978481080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.349886 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.353772 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.853761373 +0000 UTC m=+146.983043975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.399830 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" podStartSLOduration=120.399801785 podStartE2EDuration="2m0.399801785s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.376743693 +0000 UTC m=+146.506026305" watchObservedRunningTime="2026-01-27 18:08:11.399801785 +0000 UTC m=+146.529084387" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.419088 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4tcrf" podStartSLOduration=7.419061226 podStartE2EDuration="7.419061226s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.401066703 +0000 UTC m=+146.530349315" watchObservedRunningTime="2026-01-27 18:08:11.419061226 +0000 UTC m=+146.548343838" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.449872 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" podStartSLOduration=121.449855557 podStartE2EDuration="2m1.449855557s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.448230349 +0000 UTC m=+146.577512961" watchObservedRunningTime="2026-01-27 18:08:11.449855557 +0000 UTC m=+146.579138169" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.452452 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.453158 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.953115324 +0000 UTC m=+147.082397936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.473950 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podStartSLOduration=120.47392961 podStartE2EDuration="2m0.47392961s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.472182908 +0000 UTC m=+146.601465520" watchObservedRunningTime="2026-01-27 18:08:11.47392961 +0000 UTC m=+146.603212232" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.503324 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" podStartSLOduration=121.503306339 podStartE2EDuration="2m1.503306339s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.501001321 +0000 UTC m=+146.630283943" watchObservedRunningTime="2026-01-27 18:08:11.503306339 +0000 UTC m=+146.632588951" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.531979 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" podStartSLOduration=121.531955957 podStartE2EDuration="2m1.531955957s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.53102941 +0000 UTC m=+146.660312022" watchObservedRunningTime="2026-01-27 18:08:11.531955957 +0000 UTC m=+146.661238569" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.555123 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.555545 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.055526345 +0000 UTC m=+147.184808957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.558480 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" podStartSLOduration=120.558456702 podStartE2EDuration="2m0.558456702s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.557243806 +0000 UTC m=+146.686526418" watchObservedRunningTime="2026-01-27 18:08:11.558456702 +0000 UTC m=+146.687739314" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.644469 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podStartSLOduration=121.644442736 podStartE2EDuration="2m1.644442736s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.604926277 +0000 UTC m=+146.734208889" watchObservedRunningTime="2026-01-27 18:08:11.644442736 +0000 UTC m=+146.773725348" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.644814 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" podStartSLOduration=121.644809997 podStartE2EDuration="2m1.644809997s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.64289638 +0000 UTC m=+146.772178992" watchObservedRunningTime="2026-01-27 18:08:11.644809997 +0000 UTC m=+146.774092609" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.657104 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.657366 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.157327178 +0000 UTC m=+147.286609790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.657537 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.658037 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.158002118 +0000 UTC m=+147.287284730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669596 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669678 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669765 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669810 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.672165 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" podStartSLOduration=121.672154167 podStartE2EDuration="2m1.672154167s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.671218309 +0000 UTC m=+146.800500921" watchObservedRunningTime="2026-01-27 18:08:11.672154167 +0000 UTC m=+146.801436779" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.674069 4907 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8ljpb container/openshift-apiserver 
namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.674130 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podUID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.675118 4907 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-xld9m container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.675157 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" podUID="9f254819-bf2c-4c38-881f-8d12a0d56278" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.759053 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.759346 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:12.259304956 +0000 UTC m=+147.388587568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.759438 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.759783 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.25976808 +0000 UTC m=+147.389050682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.854758 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:11 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:11 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:11 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.854884 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.860322 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.860654 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:12.360631936 +0000 UTC m=+147.489914548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.911568 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.962221 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.962670 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.462655796 +0000 UTC m=+147.591938408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.062948 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.063157 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.5631209 +0000 UTC m=+147.692403512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.064031 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.064451 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.564439489 +0000 UTC m=+147.693722091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.164742 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.165128 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.665109189 +0000 UTC m=+147.794391801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.265857 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.266240 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.766223872 +0000 UTC m=+147.895506484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.269871 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"31abee030aec864b0b7650934ff98ad10d37a1b2c79e40fe336210a1e258e2ff"} Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.271118 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn59x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.271176 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.367620 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.367863 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.867827059 +0000 UTC m=+147.997109671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.368807 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.369985 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.869969073 +0000 UTC m=+147.999251685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.470349 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.470701 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.970678284 +0000 UTC m=+148.099960896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.470815 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.471241 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.97123269 +0000 UTC m=+148.100515302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.572680 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.572825 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.072791347 +0000 UTC m=+148.202073959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.573389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.573864 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.073842418 +0000 UTC m=+148.203125030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.674935 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.675359 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.175303791 +0000 UTC m=+148.304586413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.776699 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.777247 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.277228108 +0000 UTC m=+148.406510720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.853134 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:12 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:12 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:12 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.853225 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.877591 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.877950 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:13.377907448 +0000 UTC m=+148.507190200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.979825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.980371 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.480354451 +0000 UTC m=+148.609637063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.081089 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.081453 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.581369791 +0000 UTC m=+148.710652403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.081929 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.082542 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.582527315 +0000 UTC m=+148.711809928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.183256 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.183390 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.683365141 +0000 UTC m=+148.812647753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.183788 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.184378 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.68434937 +0000 UTC m=+148.813632192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.273750 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.273816 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284136 4907 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-78q6j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284198 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" podUID="38363947-4768-44b8-b3fe-f7b5b482da55" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284576 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.284775 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.784746441 +0000 UTC m=+148.914029043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284972 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.285334 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.785325899 +0000 UTC m=+148.914608511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.368006 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhc2f"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.369497 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.372344 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.385782 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.385975 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.885944457 +0000 UTC m=+149.015227069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.386415 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.387542 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.887525704 +0000 UTC m=+149.016808316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.409791 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488317 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488762 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488811 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.489021 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.989002098 +0000 UTC m=+149.118284710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.531066 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cg67x"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.532050 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.535562 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.577611 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590187 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590230 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590259 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590284 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590302 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590323 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590387 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590438 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590501 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.590724 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.090700528 +0000 UTC m=+149.219983210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590766 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.597626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.598340 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.598405 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.598431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.676176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.686087 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693287 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693499 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693550 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.694079 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.194058668 +0000 UTC m=+149.323341280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.694447 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.694998 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.749401 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.766102 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pnt7r"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.767263 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.771147 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.776763 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.781460 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.796407 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.797051 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.797524 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.2975107 +0000 UTC m=+149.426793312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.856525 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.861722 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:13 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:13 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:13 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.861766 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898097 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898319 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898365 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898383 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.898518 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.398496489 +0000 UTC m=+149.527779101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.944535 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.945702 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.963182 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"]
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.999861 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.999916 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000028 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000063 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000100 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000124 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000171 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.000915 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.500897441 +0000 UTC m=+149.630180053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.001020 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.001331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.047472 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.104363 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.104668 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.104708 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.105449 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.605424535 +0000 UTC m=+149.734707147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105502 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105542 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.106549 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.606541148 +0000 UTC m=+149.735823760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.143394 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.162290 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d"
Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.207236 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.207538 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.707517637 +0000 UTC m=+149.836800239 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.302171 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.310198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.310530 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.810517366 +0000 UTC m=+149.939799978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.380515 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"4350a3f312577284841a8f2a177a6a61f1a418d239294a056a2d101d359c1912"} Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.414335 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.414645 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.914627247 +0000 UTC m=+150.043909859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.522281 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.522903 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.022888652 +0000 UTC m=+150.152171264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.636968 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.637288 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.137269288 +0000 UTC m=+150.266551900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.739926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.740605 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.240592516 +0000 UTC m=+150.369875128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.791391 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.841275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.841660 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.341639778 +0000 UTC m=+150.470922390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.862820 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:14 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:14 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:14 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.862890 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.945322 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.947932 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:15.447917874 +0000 UTC m=+150.577200486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.958040 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.052306 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.052710 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.552687235 +0000 UTC m=+150.681969847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.059991 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.153960 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.154353 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.654334394 +0000 UTC m=+150.783616996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.174081 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.237538 4907 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.256917 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.257779 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.757758075 +0000 UTC m=+150.887040687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.313188 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.314157 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.322531 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.338898 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.358475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.358906 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.858886048 +0000 UTC m=+150.988168660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.390318 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1f9660fac977eeb3fd0684c3b80326db6baf83e4a30ff5d1a5b7689f5055ecd3"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.390377 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e5bdf425241c511c1277d134a95cf0283b70b6d80213a155057fb3a3199e42bf"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.390655 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.391083 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerStarted","Data":"381af3184b48628759e0e418b748e32d55bc4e48955c79f0bca42f10d1b84973"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.393599 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2f115eecb9745fb2732177f347a753c0aacf9bb77b615bc1b7a84858c390e9ff"} Jan 27 
18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.393633 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"777607e0327b4a1fd82b38b0638fe07f446796e4a20a050acce461f335a96b4b"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.395361 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9c3eeb2bc4e6c3388483c1a27d99aa60ac152eab3a7f905736d0bd08c7b87a40"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.395393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6662102d52a1ba43e602baddaa718a19dc1e21baabef154413168f5ad25c85a1"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.397706 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"df5c19582e26f170aa0c1548da26d952491d3edd29cac3e74949a7d252495505"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.399624 4907 generic.go:334] "Generic (PLEG): container finished" podID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.399697 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.399718 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerStarted","Data":"011be499d8b8d8d22772e72b71e952b3184b41de73c3cfac7cf3219b4b7d08b2"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403278 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerID="be6c8c2b32c82dd2e2cee12f93b1053ef5ddc94b250cbda98a5b387f916f54b6" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403372 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"be6c8c2b32c82dd2e2cee12f93b1053ef5ddc94b250cbda98a5b387f916f54b6"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerStarted","Data":"327477b6362453b7f241bd4005f967f63bfcd92d60574b597325042d23e6ed02"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403397 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.408008 4907 generic.go:334] "Generic (PLEG): container finished" podID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerID="f78417433897aedb0b02b3af7c2f2b881e06ca35f9f9655a9f750f3ff4783dfe" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.409901 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"f78417433897aedb0b02b3af7c2f2b881e06ca35f9f9655a9f750f3ff4783dfe"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.411864 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerStarted","Data":"d3c6770c98ff6f027f6fcb7e8ca70d2af0a26ac35823e4ed05e57a35a4dcfa76"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.422535 4907 generic.go:334] "Generic (PLEG): container finished" podID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerID="52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.422605 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerDied","Data":"52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459138 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459423 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459494 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " 
pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459515 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.459668 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.959650571 +0000 UTC m=+151.088933173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560682 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"redhat-marketplace-klwtz\" (UID: 
\"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560798 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560821 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.561109 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.061096794 +0000 UTC m=+151.190379406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.561659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.561910 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.591605 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.661783 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.662852 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.162830115 +0000 UTC m=+151.292112727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.711194 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.712480 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.719314 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763921 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 
18:08:15.764344 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.26432755 +0000 UTC m=+151.393610162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.820712 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.853722 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:15 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:15 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:15 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.853792 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864867 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.865521 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.865643 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.365622398 +0000 UTC m=+151.494905010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.866106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.885261 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.894121 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.940985 4907 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T18:08:15.237593018Z","Handler":null,"Name":""} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.955990 4907 csi_plugin.go:100] 
kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.956048 4907 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.965760 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.982055 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.982100 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.018714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.044299 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.052679 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:08:16 crc kubenswrapper[4907]: W0127 18:08:16.064767 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddee6d631_48d1_4137_9736_c028fb27e655.slice/crio-9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9 WatchSource:0}: Error finding container 9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9: Status 404 returned error can't find the container with id 9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9 Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.066336 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.072229 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.240707 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.252709 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:08:16 crc kubenswrapper[4907]: W0127 18:08:16.315594 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded699310_2f9f_414f_ad04_7778af36ddb7.slice/crio-6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf WatchSource:0}: Error finding container 6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf: Status 404 returned error can't find the container with id 6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.433290 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerStarted","Data":"6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.439679 4907 generic.go:334] "Generic (PLEG): container finished" podID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" exitCode=0 Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.439747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.500479 4907 generic.go:334] "Generic (PLEG): container finished" podID="dee6d631-48d1-4137-9736-c028fb27e655" containerID="dd24dd32da263b7052a82f6c2b680b2979832173d139168c7d6b2bbf5b442718" exitCode=0 Jan 27 18:08:16 crc kubenswrapper[4907]: 
I0127 18:08:16.500630 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"dd24dd32da263b7052a82f6c2b680b2979832173d139168c7d6b2bbf5b442718"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.500731 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerStarted","Data":"9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.525056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"b7b18eda64aebb23e9943e310d6bef774c0c9293904973ce32213bb07072d47f"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.573326 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" podStartSLOduration=12.573284176 podStartE2EDuration="12.573284176s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:16.573275856 +0000 UTC m=+151.702558468" watchObservedRunningTime="2026-01-27 18:08:16.573284176 +0000 UTC m=+151.702566788" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633622 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633687 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633716 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633737 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.638800 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.640386 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.643169 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.643408 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.646761 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.656188 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.681053 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.681102 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.690323 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.692419 4907 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8ljpb 
container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]log ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]etcd ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/max-in-flight-filter ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 18:08:16 crc kubenswrapper[4907]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/openshift.io-startinformers ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 18:08:16 crc kubenswrapper[4907]: livez check failed Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.692492 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podUID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.697239 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.726905 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.728866 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.739071 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.740466 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.745966 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:08:16 crc kubenswrapper[4907]: E0127 18:08:16.755363 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded699310_2f9f_414f_ad04_7778af36ddb7.slice/crio-conmon-41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded699310_2f9f_414f_ad04_7778af36ddb7.slice/crio-41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.782748 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.782923 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.782991 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.783087 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.783181 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.783281 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.815651 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.815904 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.819246 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.822167 4907 patch_prober.go:28] interesting pod/console-f9d7485db-grwdr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.822223 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-grwdr" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.853290 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.858889 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:16 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:16 crc kubenswrapper[4907]: 
[+]process-running ok Jan 27 18:08:16 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.858959 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.871942 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.885676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.885764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.885949 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc 
kubenswrapper[4907]: I0127 18:08:16.886368 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.887636 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.944544 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.998717 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.089204 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.089303 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.089378 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.098732 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4" (OuterVolumeSpecName: "kube-api-access-l6wm4") pod "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" (UID: "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf"). InnerVolumeSpecName "kube-api-access-l6wm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.099491 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" (UID: "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.103645 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" (UID: "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.115844 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.116710 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:08:17 crc kubenswrapper[4907]: E0127 18:08:17.116988 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerName="collect-profiles" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.117000 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerName="collect-profiles" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.117120 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerName="collect-profiles" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.117981 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.135874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.157043 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192021 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192086 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192109 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192167 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192179 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192189 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.295127 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.295192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.295218 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.298951 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.299745 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.302629 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.304702 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.306062 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.306267 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.312680 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.344908 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.396114 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.396166 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.441082 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.447868 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.498346 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.498400 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.498467 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") 
pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.511487 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:08:17 crc kubenswrapper[4907]: W0127 18:08:17.516211 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f9526ea_3ca9_4727_aadd_3103419511d9.slice/crio-a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a WatchSource:0}: Error finding container a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a: Status 404 returned error can't find the container with id a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.522346 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.539803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerStarted","Data":"7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4"} Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.539861 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerStarted","Data":"c5aa828cd072604ed1f906a58b65bc98f6dfd27675da5071e1386f563dc177a1"} Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.539955 4907 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.549230 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerDied","Data":"048e1dff87736aab4fe9377cb7178695cb1717c58a7071500e8672995ab0ccd7"} Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.549270 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.549280 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="048e1dff87736aab4fe9377cb7178695cb1717c58a7071500e8672995ab0ccd7" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.550355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerStarted","Data":"a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a"} Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.551845 4907 generic.go:334] "Generic (PLEG): container finished" podID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13" exitCode=0 Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.552950 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13"} Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.569621 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" 
podStartSLOduration=127.569596388 podStartE2EDuration="2m7.569596388s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:17.565769585 +0000 UTC m=+152.695052217" watchObservedRunningTime="2026-01-27 18:08:17.569596388 +0000 UTC m=+152.698879000" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.590573 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.635639 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.683600 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.707319 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:17 crc kubenswrapper[4907]: W0127 18:08:17.719787 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8e033ab3_25a2_4b59_80a5_a9af38d07e93.slice/crio-30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1 WatchSource:0}: Error finding container 30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1: Status 404 returned error can't find the container with id 30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1 Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.781623 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 18:08:17 crc 
kubenswrapper[4907]: I0127 18:08:17.865956 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:17 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:17 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:17 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.866082 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.979659 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:08:17 crc kubenswrapper[4907]: W0127 18:08:17.992309 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c7f489_c85f_47d4_9ef7_d0f9aba0cc19.slice/crio-b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522 WatchSource:0}: Error finding container b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522: Status 404 returned error can't find the container with id b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522 Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.098070 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.596415 4907 generic.go:334] "Generic (PLEG): container finished" podID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerID="654679c743b9560bbba18b38261b7b4cf9709df04c5818506bb60f06a2ff6062" exitCode=0 Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 
18:08:18.596478 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"654679c743b9560bbba18b38261b7b4cf9709df04c5818506bb60f06a2ff6062"} Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.642647 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerStarted","Data":"c021a49533f9d2b9b5d2292cf1255b5fe4ad3bb1c0de4267e06e3872cd5c71f6"} Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.644403 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerStarted","Data":"30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1"} Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.646914 4907 generic.go:334] "Generic (PLEG): container finished" podID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerID="38285b14f93c22653ebbde6f30cf34ab1bec2a2df662e6ed5f2ede4a2203a9bb" exitCode=0 Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.646995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"38285b14f93c22653ebbde6f30cf34ab1bec2a2df662e6ed5f2ede4a2203a9bb"} Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.647125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerStarted","Data":"b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522"} Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.853549 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:18 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:18 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:18 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.853624 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.889252 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.411575 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.798591 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerStarted","Data":"2090073edde9c9116430ed919aa591dc52f9fc6d85981d69423df39d08b0aca5"} Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.804385 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerStarted","Data":"f1de975e142966b525c5e1a58e3fe61084421fc58711740060b3a5b538fba656"} Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.814315 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.814297334 podStartE2EDuration="3.814297334s" podCreationTimestamp="2026-01-27 
18:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:19.813288314 +0000 UTC m=+154.942570926" watchObservedRunningTime="2026-01-27 18:08:19.814297334 +0000 UTC m=+154.943579946" Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.832654 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.832634196 podStartE2EDuration="2.832634196s" podCreationTimestamp="2026-01-27 18:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:19.830526634 +0000 UTC m=+154.959809266" watchObservedRunningTime="2026-01-27 18:08:19.832634196 +0000 UTC m=+154.961916808" Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.851170 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:19 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:19 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:19 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.851218 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.815869 4907 generic.go:334] "Generic (PLEG): container finished" podID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerID="2090073edde9c9116430ed919aa591dc52f9fc6d85981d69423df39d08b0aca5" exitCode=0 Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 
18:08:20.815913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerDied","Data":"2090073edde9c9116430ed919aa591dc52f9fc6d85981d69423df39d08b0aca5"} Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.819468 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerID="f1de975e142966b525c5e1a58e3fe61084421fc58711740060b3a5b538fba656" exitCode=0 Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.819504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerDied","Data":"f1de975e142966b525c5e1a58e3fe61084421fc58711740060b3a5b538fba656"} Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.852378 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:20 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:20 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:20 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.852472 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.683469 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.694901 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.852530 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:21 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:21 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:21 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.852871 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.268497 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302109 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302191 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302305 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e033ab3-25a2-4b59-80a5-a9af38d07e93" (UID: "8e033ab3-25a2-4b59-80a5-a9af38d07e93"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302510 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.309975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e033ab3-25a2-4b59-80a5-a9af38d07e93" (UID: "8e033ab3-25a2-4b59-80a5-a9af38d07e93"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.351860 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.403291 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.504248 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") pod \"3d614d08-52fd-42af-a4bd-b17d80303a0d\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.504355 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"3d614d08-52fd-42af-a4bd-b17d80303a0d\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.505348 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d614d08-52fd-42af-a4bd-b17d80303a0d" (UID: "3d614d08-52fd-42af-a4bd-b17d80303a0d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.508776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d614d08-52fd-42af-a4bd-b17d80303a0d" (UID: "3d614d08-52fd-42af-a4bd-b17d80303a0d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.605797 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.605829 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.848686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerDied","Data":"30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1"} Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.849052 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.848726 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.851177 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerDied","Data":"c021a49533f9d2b9b5d2292cf1255b5fe4ad3bb1c0de4267e06e3872cd5c71f6"} Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.851219 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c021a49533f9d2b9b5d2292cf1255b5fe4ad3bb1c0de4267e06e3872cd5c71f6" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.851415 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.852954 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:22 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:22 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:22 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.852991 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:23 crc kubenswrapper[4907]: I0127 18:08:23.852348 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:23 crc 
kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:23 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:23 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:23 crc kubenswrapper[4907]: I0127 18:08:23.852409 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:24 crc kubenswrapper[4907]: I0127 18:08:24.852655 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:24 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:24 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:24 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:24 crc kubenswrapper[4907]: I0127 18:08:24.853181 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:25 crc kubenswrapper[4907]: I0127 18:08:25.854180 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:25 crc kubenswrapper[4907]: [+]has-synced ok Jan 27 18:08:25 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:25 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:25 crc kubenswrapper[4907]: I0127 18:08:25.854266 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" 
podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.521083 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.521412 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634128 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634216 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634140 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634789 4907 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.821468 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.824864 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.853450 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.855910 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.061004 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.061925 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" containerID="cri-o://ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb" gracePeriod=30 Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.089918 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.090182 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" containerID="cri-o://5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c" gracePeriod=30 Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.398846 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.418047 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.589937 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.975968 4907 generic.go:334] "Generic (PLEG): container finished" podID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerID="5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c" exitCode=0 Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.976131 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerDied","Data":"5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c"} Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.982664 4907 generic.go:334] "Generic (PLEG): container finished" podID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerID="ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb" exitCode=0 Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.982717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerDied","Data":"ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb"} Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.250524 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634165 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634223 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" 
podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634317 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634404 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634476 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635057 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635231 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635395 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541"} pod="openshift-console/downloads-7954f5f757-h79fx" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635585 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" containerID="cri-o://4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541" gracePeriod=2 Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.706845 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.707244 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.978640 4907 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7mcmq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.978724 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 18:08:40 crc kubenswrapper[4907]: E0127 18:08:40.661993 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 18:08:40 crc kubenswrapper[4907]: E0127 18:08:40.662541 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdt5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePol
icy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pnt7r_openshift-marketplace(2d1e3321-a7c6-4910-adec-31bf7b3c8f0a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:40 crc kubenswrapper[4907]: E0127 18:08:40.663810 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pnt7r" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" Jan 27 18:08:44 crc kubenswrapper[4907]: E0127 18:08:44.642880 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 18:08:44 crc kubenswrapper[4907]: E0127 18:08:44.643138 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s97lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhc2f_openshift-marketplace(7c7f1204-674f-4d4e-a695-28b2d0956b32): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:44 crc kubenswrapper[4907]: E0127 18:08:44.644427 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mhc2f" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" Jan 27 18:08:45 crc 
kubenswrapper[4907]: E0127 18:08:45.504779 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mhc2f" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" Jan 27 18:08:45 crc kubenswrapper[4907]: E0127 18:08:45.504888 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pnt7r" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" Jan 27 18:08:46 crc kubenswrapper[4907]: I0127 18:08:46.633887 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:46 crc kubenswrapper[4907]: I0127 18:08:46.634285 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.055393 4907 generic.go:334] "Generic (PLEG): container finished" podID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerID="4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541" exitCode=0 Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.055485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" 
event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerDied","Data":"4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541"} Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.631047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.706598 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.706697 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.978719 4907 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7mcmq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.978843 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:08:49 crc kubenswrapper[4907]: E0127 18:08:49.646653 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 18:08:49 crc kubenswrapper[4907]: E0127 18:08:49.646844 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frltg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError
,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-klwtz_openshift-marketplace(dee6d631-48d1-4137-9736-c028fb27e655): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:49 crc kubenswrapper[4907]: E0127 18:08:49.648135 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-klwtz" podUID="dee6d631-48d1-4137-9736-c028fb27e655" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.156086 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-klwtz" podUID="dee6d631-48d1-4137-9736-c028fb27e655" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.215051 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.220518 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.243353 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.243522 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mthtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cg67x_openshift-marketplace(7ee8faea-87ec-4620-b6a8-db398d35039a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.245766 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cg67x" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250034 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250211 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wsq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-glcgf_openshift-marketplace(ed699310-2f9f-414f-ad04-7778af36ddb7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250284 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250360 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dz68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-b7l4d_openshift-marketplace(f317b8ef-4875-4f24-8926-8efd5826a51e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250463 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"]
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250793 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250815 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager"
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250835 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerName="pruner"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250845 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerName="pruner"
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250856 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250863 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager"
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250875 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerName="pruner"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250882 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerName="pruner"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.251009 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerName="pruner"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.251021 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.251033 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerName="pruner"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.251046 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager"
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.251467 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b7l4d" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e"
Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.251480 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-glcgf" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.252259 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.274787 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"]
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277100 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277155 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277182 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277198 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277215 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277276 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277324 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277343 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277359 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") "
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277450 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277478 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277507 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.281444 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config" (OuterVolumeSpecName: "config") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.281463 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.281447 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config" (OuterVolumeSpecName: "config") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.282097 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca" (OuterVolumeSpecName: "client-ca") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.282225 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca" (OuterVolumeSpecName: "client-ca") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.285805 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7" (OuterVolumeSpecName: "kube-api-access-jzft7") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "kube-api-access-jzft7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.286251 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697" (OuterVolumeSpecName: "kube-api-access-4n697") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "kube-api-access-4n697". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.287052 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.287106 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379002 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379131 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379247 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379260 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379271 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379280 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379292 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379306 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379315 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379326 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379334 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.380270 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.380346 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.383240 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.397885 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.576638 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.087648 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerDied","Data":"ab8e4f87a2ffbe236cf7a8b9faa6044f734bcd783a8ccf483230babd8b2d0aab"}
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.087707 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.087723 4907 scope.go:117] "RemoveContainer" containerID="5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c"
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.091220 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerDied","Data":"53f1bb78246a95f04a0e3a59320d7de5b66a380634a406b2deaad462424ff23c"}
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.091418 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.167332 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"]
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.169349 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"]
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.174829 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"]
Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.177279 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"]
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.039959 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"]
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.040893 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.046624 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.046929 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.047970 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.048377 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.048393 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.048764 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.050929 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"]
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.054947 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.079785 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.080674 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.085470 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.085847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.096044 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105257 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105321 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105344 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105362 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105456 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.117899 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"]
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.207937 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208282 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208306 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208339 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208365 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208383 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208405 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.209272 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.210014 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.210350 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.210382 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.215253 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.238138 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.238331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.355487 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.394979 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.753947 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" path="/var/lib/kubelet/pods/248aff8a-60f5-4154-a7bb-2dd95e4b2555/volumes"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.754711 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" path="/var/lib/kubelet/pods/98f518f9-4f3f-45f1-80f4-b50d4eb03135/volumes"
Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.787274 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.292923 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-glcgf" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7"
Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.293081 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-b7l4d" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e"
Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.293131 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cg67x" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a"
Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.327116 4907 scope.go:117] "RemoveContainer" containerID="ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb"
Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.363528 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.364084 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9khpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jhwph_openshift-marketplace(1f9526ea-3ca9-4727-aadd-3103419511d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.366156 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.550689 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n2z5k"] Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.623047 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.884545 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.899192 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:08:55 crc kubenswrapper[4907]: W0127 18:08:55.956611 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce1035d6_a135_49c3_8d47_48005ecfc2d7.slice/crio-6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a WatchSource:0}: Error finding container 6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a: Status 404 returned error can't find the container with id 
6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.127448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" event={"ID":"eeaae2ee-c57b-4323-9d3c-563d87d85f08","Type":"ContainerStarted","Data":"cf780a670ea7e26bae79f6bfd9a8b3fe16142cfe305261656badb4b95b14c50a"} Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.129017 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerStarted","Data":"df45217b3decf5237b881f284a2825c844cbaf1f22a682d4bbd16bdc08117ced"} Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.131641 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerStarted","Data":"6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a"} Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.136687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerStarted","Data":"c20e1125c69187397fd3d90a194c9f5e0105aa13b720cd7fbb8a8dbc1ba84399"} Jan 27 18:08:56 crc kubenswrapper[4907]: E0127 18:08:56.138452 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.521892 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.521983 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.639976 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.640058 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.220119 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerStarted","Data":"24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.228800 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" containerID="cri-o://24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce" gracePeriod=30 
Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.229434 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.253349 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.257890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerStarted","Data":"6629b1b5c77c6cdbd075185f02e2e8dccc29f1ed5e33db884228f2ec0a4dd7c2"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.258953 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.259026 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.259056 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.281279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerStarted","Data":"4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0"} Jan 27 18:08:57 crc 
kubenswrapper[4907]: I0127 18:08:57.282635 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.305692 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" podStartSLOduration=24.305670255 podStartE2EDuration="24.305670255s" podCreationTimestamp="2026-01-27 18:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:57.262104976 +0000 UTC m=+192.391387588" watchObservedRunningTime="2026-01-27 18:08:57.305670255 +0000 UTC m=+192.434952867" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.308922 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" event={"ID":"eeaae2ee-c57b-4323-9d3c-563d87d85f08","Type":"ContainerStarted","Data":"bba4db02fed7b83ec087a05e847f09908fd62f4840c5fe753d719bb1bf8eea4b"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.323700 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.329775 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerStarted","Data":"68356953970f6d3642ef7e879a454bf3b0981704cd996f12254b0f4421aace32"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.379465 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.379444919 podStartE2EDuration="4.379444919s" podCreationTimestamp="2026-01-27 18:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:57.377812871 +0000 UTC m=+192.507095483" watchObservedRunningTime="2026-01-27 18:08:57.379444919 +0000 UTC m=+192.508727531" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.339988 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" event={"ID":"eeaae2ee-c57b-4323-9d3c-563d87d85f08","Type":"ContainerStarted","Data":"1c34b09cf0f7dbe39eb22af74758ea5e58fecc5910cfc23955e5c5c47e84b08b"} Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.342490 4907 generic.go:334] "Generic (PLEG): container finished" podID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerID="68356953970f6d3642ef7e879a454bf3b0981704cd996f12254b0f4421aace32" exitCode=0 Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.342579 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerDied","Data":"68356953970f6d3642ef7e879a454bf3b0981704cd996f12254b0f4421aace32"} Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.345977 4907 generic.go:334] "Generic (PLEG): container finished" podID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerID="24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce" exitCode=0 Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.346033 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerDied","Data":"24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce"} Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.346492 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 
10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.346548 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.354601 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n2z5k" podStartSLOduration=168.354579214 podStartE2EDuration="2m48.354579214s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:58.354060538 +0000 UTC m=+193.483343150" watchObservedRunningTime="2026-01-27 18:08:58.354579214 +0000 UTC m=+193.483861846" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.355242 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" podStartSLOduration=5.355234203 podStartE2EDuration="5.355234203s" podCreationTimestamp="2026-01-27 18:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:57.408027965 +0000 UTC m=+192.537310577" watchObservedRunningTime="2026-01-27 18:08:58.355234203 +0000 UTC m=+193.484516835" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.424645 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.456512 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:08:58 crc kubenswrapper[4907]: E0127 18:08:58.456756 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.456767 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.456863 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.457232 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.506798 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.624627 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.624719 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.624767 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625191 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625372 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod 
\"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625459 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625528 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625769 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config" (OuterVolumeSpecName: "config") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.626085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.632498 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm" (OuterVolumeSpecName: "kube-api-access-mmxrm") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "kube-api-access-mmxrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.645743 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726230 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726304 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726364 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726420 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726433 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726444 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726452 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.727481 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.727915 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.730197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod 
\"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.750127 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.769102 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.354635 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerStarted","Data":"69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9"} Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.357252 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerDied","Data":"6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a"} Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.357358 4907 scope.go:117] "RemoveContainer" containerID="24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.357474 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.358386 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.358475 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.440868 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.450218 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.761352 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" path="/var/lib/kubelet/pods/ce1035d6-a135-49c3-8d47-48005ecfc2d7/volumes" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.277040 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.283248 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.300934 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.364246 4907 generic.go:334] "Generic (PLEG): container finished" podID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerID="69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9" exitCode=0 Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.364293 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9"} Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.454599 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.454701 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.454728 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 
18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.555940 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556014 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556900 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556998 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.577952 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.606337 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.751945 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.862635 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.862734 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.863052 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" (UID: "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.866728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" (UID: "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.964722 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.964758 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:01 crc kubenswrapper[4907]: I0127 18:09:01.370973 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerDied","Data":"df45217b3decf5237b881f284a2825c844cbaf1f22a682d4bbd16bdc08117ced"} Jan 27 18:09:01 crc kubenswrapper[4907]: I0127 18:09:01.371049 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df45217b3decf5237b881f284a2825c844cbaf1f22a682d4bbd16bdc08117ced" Jan 27 18:09:01 crc kubenswrapper[4907]: I0127 18:09:01.371071 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:09:03 crc kubenswrapper[4907]: I0127 18:09:03.441938 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:03 crc kubenswrapper[4907]: I0127 18:09:03.599709 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.392773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerStarted","Data":"c2422fce3252ef87a2db89e76b11dc4004496dc8add034465ec5552dbf70bdce"} Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.393995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerStarted","Data":"ef0720fe87e3f12e59dc7091d752de57214cbed73b14c8a59f3d4b6af6479d52"} Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.395821 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerStarted","Data":"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5"} Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.398100 4907 generic.go:334] "Generic (PLEG): container finished" podID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerID="c46d6df9d82d4f4cc7de32f448a7f920700e22083478d02639b39cd9ec76b646" exitCode=0 Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.398167 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" 
event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"c46d6df9d82d4f4cc7de32f448a7f920700e22083478d02639b39cd9ec76b646"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.405950 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerStarted","Data":"84d1252629f94f6052dc7e9f370b57a2a08987285cf9a2ccd0aa7378e581c2a3"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.408712 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerStarted","Data":"af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.410276 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.417076 4907 generic.go:334] "Generic (PLEG): container finished" podID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" exitCode=0 Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.417150 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.422202 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.438538 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.438511712 podStartE2EDuration="5.438511712s" podCreationTimestamp="2026-01-27 18:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:05.429360144 +0000 UTC m=+200.558642766" watchObservedRunningTime="2026-01-27 18:09:05.438511712 +0000 UTC m=+200.567794364" Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.516720 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" podStartSLOduration=12.51669122 podStartE2EDuration="12.51669122s" podCreationTimestamp="2026-01-27 18:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:05.511957652 +0000 UTC m=+200.641240294" watchObservedRunningTime="2026-01-27 18:09:05.51669122 +0000 UTC m=+200.645973852" Jan 27 18:09:06 crc kubenswrapper[4907]: I0127 18:09:06.653965 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.437232 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerStarted","Data":"c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581"} Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.448917 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.448981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.788342 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m4mfc" podStartSLOduration=3.090707191 podStartE2EDuration="50.788315212s" podCreationTimestamp="2026-01-27 18:08:17 +0000 UTC" firstStartedPulling="2026-01-27 18:08:18.650789713 +0000 UTC m=+153.780072325" lastFinishedPulling="2026-01-27 18:09:06.348397704 +0000 UTC m=+201.477680346" observedRunningTime="2026-01-27 18:09:07.457066026 +0000 UTC m=+202.586348708" watchObservedRunningTime="2026-01-27 18:09:07.788315212 +0000 UTC m=+202.917597854" Jan 27 18:09:09 crc kubenswrapper[4907]: I0127 18:09:09.416310 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m4mfc" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" probeResult="failure" output=< Jan 27 18:09:09 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:09:09 crc kubenswrapper[4907]: > Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.489573 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.505984 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerID="ed1ff5a394e52796e4f4ec3501247d97700ad989bc354c05f28efa01945dae35" exitCode=0 Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.506054 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"ed1ff5a394e52796e4f4ec3501247d97700ad989bc354c05f28efa01945dae35"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.527222 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" 
event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerStarted","Data":"51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.537125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerStarted","Data":"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.540300 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerStarted","Data":"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.543771 4907 generic.go:334] "Generic (PLEG): container finished" podID="dee6d631-48d1-4137-9736-c028fb27e655" containerID="4b9a25f367c300489066223e7c655f68dd2a0d8bca339cc8ab69304836e3cab8" exitCode=0 Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.543842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"4b9a25f367c300489066223e7c655f68dd2a0d8bca339cc8ab69304836e3cab8"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.547908 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.550183 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pnt7r" podStartSLOduration=3.662961028 podStartE2EDuration="1m4.550161393s" podCreationTimestamp="2026-01-27 18:08:13 +0000 UTC" firstStartedPulling="2026-01-27 18:08:15.413733842 +0000 UTC m=+150.543016454" lastFinishedPulling="2026-01-27 
18:09:16.300934207 +0000 UTC m=+211.430216819" observedRunningTime="2026-01-27 18:09:17.546439194 +0000 UTC m=+212.675721806" watchObservedRunningTime="2026-01-27 18:09:17.550161393 +0000 UTC m=+212.679444005" Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.550909 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerStarted","Data":"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.631459 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhc2f" podStartSLOduration=3.738040472 podStartE2EDuration="1m4.631435652s" podCreationTimestamp="2026-01-27 18:08:13 +0000 UTC" firstStartedPulling="2026-01-27 18:08:15.402921102 +0000 UTC m=+150.532203714" lastFinishedPulling="2026-01-27 18:09:16.296316252 +0000 UTC m=+211.425598894" observedRunningTime="2026-01-27 18:09:17.630020631 +0000 UTC m=+212.759303243" watchObservedRunningTime="2026-01-27 18:09:17.631435652 +0000 UTC m=+212.760718264" Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.559154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerStarted","Data":"a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e"} Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.560979 4907 generic.go:334] "Generic (PLEG): container finished" podID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185" exitCode=0 Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.561009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" 
event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"} Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.563281 4907 generic.go:334] "Generic (PLEG): container finished" podID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" exitCode=0 Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.563319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685"} Jan 27 18:09:19 crc kubenswrapper[4907]: I0127 18:09:19.571266 4907 generic.go:334] "Generic (PLEG): container finished" podID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerID="a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e" exitCode=0 Jan 27 18:09:19 crc kubenswrapper[4907]: I0127 18:09:19.571316 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e"} Jan 27 18:09:21 crc kubenswrapper[4907]: I0127 18:09:21.941494 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:09:21 crc kubenswrapper[4907]: I0127 18:09:21.942198 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m4mfc" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" containerID="cri-o://c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581" gracePeriod=2 Jan 27 18:09:22 crc kubenswrapper[4907]: I0127 18:09:22.590161 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerID="c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581" exitCode=0 Jan 27 18:09:22 crc kubenswrapper[4907]: I0127 18:09:22.590247 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581"} Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.599101 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerStarted","Data":"cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4"} Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.687095 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.687176 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.744400 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.876291 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.915881 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.915985 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.916022 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.919225 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities" (OuterVolumeSpecName: "utilities") pod "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" (UID: "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.923331 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x" (OuterVolumeSpecName: "kube-api-access-nth5x") pod "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" (UID: "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19"). InnerVolumeSpecName "kube-api-access-nth5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.017219 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.017267 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.059091 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" (UID: "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.118646 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.144934 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.145015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.186053 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.611272 4907 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.611276 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522"} Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.611839 4907 scope.go:117] "RemoveContainer" containerID="c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.638995 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cg67x" podStartSLOduration=4.9349759760000005 podStartE2EDuration="1m11.638976684s" podCreationTimestamp="2026-01-27 18:08:13 +0000 UTC" firstStartedPulling="2026-01-27 18:08:15.405429526 +0000 UTC m=+150.534712128" lastFinishedPulling="2026-01-27 18:09:22.109430224 +0000 UTC m=+217.238712836" observedRunningTime="2026-01-27 18:09:24.637574833 +0000 UTC m=+219.766857445" watchObservedRunningTime="2026-01-27 18:09:24.638976684 +0000 UTC m=+219.768259296" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.652161 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.661729 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.661847 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.690066 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:25 crc kubenswrapper[4907]: I0127 
18:09:25.756776 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" path="/var/lib/kubelet/pods/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19/volumes" Jan 27 18:09:25 crc kubenswrapper[4907]: I0127 18:09:25.999789 4907 scope.go:117] "RemoveContainer" containerID="69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.019806 4907 scope.go:117] "RemoveContainer" containerID="38285b14f93c22653ebbde6f30cf34ab1bec2a2df662e6ed5f2ede4a2203a9bb" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.521257 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.521333 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.521391 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.522117 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 
18:09:26.522197 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e" gracePeriod=600 Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.945278 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.947288 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pnt7r" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" containerID="cri-o://51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641" gracePeriod=2 Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.635906 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e" exitCode=0 Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.635972 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e"} Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.637882 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerStarted","Data":"ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84"} Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.640773 4907 generic.go:334] "Generic (PLEG): container finished" podID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" 
containerID="51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641" exitCode=0 Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.640801 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641"} Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.660513 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-klwtz" podStartSLOduration=4.170619449 podStartE2EDuration="1m13.660494455s" podCreationTimestamp="2026-01-27 18:08:15 +0000 UTC" firstStartedPulling="2026-01-27 18:08:16.510121066 +0000 UTC m=+151.639403678" lastFinishedPulling="2026-01-27 18:09:25.999996072 +0000 UTC m=+221.129278684" observedRunningTime="2026-01-27 18:09:28.659195687 +0000 UTC m=+223.788478309" watchObservedRunningTime="2026-01-27 18:09:28.660494455 +0000 UTC m=+223.789777067" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.154855 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.308913 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.309312 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.309339 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.310295 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities" (OuterVolumeSpecName: "utilities") pod "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" (UID: "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.319116 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q" (OuterVolumeSpecName: "kube-api-access-gdt5q") pod "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" (UID: "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a"). InnerVolumeSpecName "kube-api-access-gdt5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.367292 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" (UID: "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.419083 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.419150 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.419164 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.651898 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"d3c6770c98ff6f027f6fcb7e8ca70d2af0a26ac35823e4ed05e57a35a4dcfa76"} Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.651973 4907 scope.go:117] "RemoveContainer" containerID="51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.651993 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.667845 4907 scope.go:117] "RemoveContainer" containerID="c46d6df9d82d4f4cc7de32f448a7f920700e22083478d02639b39cd9ec76b646" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.687195 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.692439 4907 scope.go:117] "RemoveContainer" containerID="f78417433897aedb0b02b3af7c2f2b881e06ca35f9f9655a9f750f3ff4783dfe" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.697728 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:09:31 crc kubenswrapper[4907]: I0127 18:09:31.665035 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39"} Jan 27 18:09:31 crc kubenswrapper[4907]: I0127 18:09:31.758800 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" path="/var/lib/kubelet/pods/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a/volumes" Jan 27 18:09:32 crc kubenswrapper[4907]: I0127 18:09:32.672628 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerStarted","Data":"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392"} Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.027444 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b7l4d" podStartSLOduration=6.388819767 podStartE2EDuration="1m20.027421306s" podCreationTimestamp="2026-01-27 18:08:13 
+0000 UTC" firstStartedPulling="2026-01-27 18:08:16.483486088 +0000 UTC m=+151.612768700" lastFinishedPulling="2026-01-27 18:09:30.122087627 +0000 UTC m=+225.251370239" observedRunningTime="2026-01-27 18:09:32.719799002 +0000 UTC m=+227.849081614" watchObservedRunningTime="2026-01-27 18:09:33.027421306 +0000 UTC m=+228.156703918" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.027673 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.027899 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" containerID="cri-o://4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0" gracePeriod=30 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.118789 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.119294 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" containerID="cri-o://af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143" gracePeriod=30 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.356681 4907 patch_prober.go:28] interesting pod/controller-manager-68686db9cf-mskkh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.357119 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.680110 4907 generic.go:334] "Generic (PLEG): container finished" podID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerID="4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0" exitCode=0 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.680179 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerDied","Data":"4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0"} Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.682523 4907 generic.go:334] "Generic (PLEG): container finished" podID="090a67dd-469f-44de-9760-bb58338594d7" containerID="af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143" exitCode=0 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.682773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerDied","Data":"af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143"} Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.857756 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.858250 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.907452 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.010643 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.076993 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077119 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077162 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077187 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod 
\"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077990 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.078380 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.078450 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config" (OuterVolumeSpecName: "config") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.084224 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb" (OuterVolumeSpecName: "kube-api-access-pnlnb") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "kube-api-access-pnlnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.091168 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179057 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179373 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179382 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179392 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179403 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.295064 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.303029 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.303077 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.349321 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381485 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod \"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381583 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod \"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod 
\"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.382568 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.382584 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config" (OuterVolumeSpecName: "config") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.387710 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz" (OuterVolumeSpecName: "kube-api-access-zjktz") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "kube-api-access-zjktz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.387790 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482584 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482624 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482638 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482651 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656105 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656454 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656476 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656497 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656512 4907 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656532 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656544 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656597 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerName="pruner" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656615 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerName="pruner" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656658 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656672 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656688 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656701 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656725 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656737 4907 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656754 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656766 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656783 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656794 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656972 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656997 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657014 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657029 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657048 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerName="pruner" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657611 4907 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.662678 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.663846 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.676799 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.678316 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685066 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685100 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685127 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685144 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685168 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685187 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " 
pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685205 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685228 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.692922 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerDied","Data":"c20e1125c69187397fd3d90a194c9f5e0105aa13b720cd7fbb8a8dbc1ba84399"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.693257 4907 scope.go:117] "RemoveContainer" containerID="4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.693526 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.699806 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerDied","Data":"ef0720fe87e3f12e59dc7091d752de57214cbed73b14c8a59f3d4b6af6479d52"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.699904 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.707293 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerStarted","Data":"c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.715784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerStarted","Data":"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.725362 4907 scope.go:117] "RemoveContainer" containerID="af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.744718 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-glcgf" podStartSLOduration=3.871855473 podStartE2EDuration="1m19.744696011s" podCreationTimestamp="2026-01-27 18:08:15 +0000 UTC" firstStartedPulling="2026-01-27 18:08:17.68214653 +0000 UTC m=+152.811429142" lastFinishedPulling="2026-01-27 18:09:33.554987068 +0000 UTC m=+228.684269680" observedRunningTime="2026-01-27 
18:09:34.738971733 +0000 UTC m=+229.868254345" watchObservedRunningTime="2026-01-27 18:09:34.744696011 +0000 UTC m=+229.873978633" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.758388 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jhwph" podStartSLOduration=3.830156403 podStartE2EDuration="1m18.758364161s" podCreationTimestamp="2026-01-27 18:08:16 +0000 UTC" firstStartedPulling="2026-01-27 18:08:18.620411474 +0000 UTC m=+153.749694086" lastFinishedPulling="2026-01-27 18:09:33.548619232 +0000 UTC m=+228.677901844" observedRunningTime="2026-01-27 18:09:34.758218237 +0000 UTC m=+229.887500849" watchObservedRunningTime="2026-01-27 18:09:34.758364161 +0000 UTC m=+229.887646773" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.761138 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.776185 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.785499 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786252 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786290 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786314 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786360 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786390 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786419 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 
27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786477 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786501 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.788014 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.788287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.789104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.789582 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.789734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.791108 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.791159 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.792720 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.796177 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.806300 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.810749 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.978119 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.988148 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.261807 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.389915 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:09:35 crc kubenswrapper[4907]: W0127 18:09:35.399766 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09e10c2d_9dea_4d6c_9d36_feb0fdd0df13.slice/crio-0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f WatchSource:0}: Error finding container 0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f: Status 404 returned error can't find the container with id 0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.724450 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerStarted","Data":"2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.724972 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.724994 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerStarted","Data":"e14ea8d0f79988b247be75c0c550ba68530c65e3005c05e315cd0f64e6973a7d"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.726712 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerStarted","Data":"d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.726750 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerStarted","Data":"0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.727825 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.730084 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.753991 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" podStartSLOduration=2.753974983 podStartE2EDuration="2.753974983s" podCreationTimestamp="2026-01-27 18:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:35.753671134 +0000 UTC m=+230.882953756" watchObservedRunningTime="2026-01-27 18:09:35.753974983 +0000 UTC m=+230.883257585" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.760419 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" path="/var/lib/kubelet/pods/02a0c38d-5a4e-4189-86b8-6a42930553a2/volumes" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.761202 4907 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="090a67dd-469f-44de-9760-bb58338594d7" path="/var/lib/kubelet/pods/090a67dd-469f-44de-9760-bb58338594d7/volumes" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.821708 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.821767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.872120 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.893107 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" podStartSLOduration=2.893088156 podStartE2EDuration="2.893088156s" podCreationTimestamp="2026-01-27 18:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:35.852326082 +0000 UTC m=+230.981608694" watchObservedRunningTime="2026-01-27 18:09:35.893088156 +0000 UTC m=+231.022370768" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.045436 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.045575 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.096310 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.132194 4907 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.292333 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"] Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.774474 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:37 crc kubenswrapper[4907]: I0127 18:09:37.157684 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:37 crc kubenswrapper[4907]: I0127 18:09:37.157744 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:38 crc kubenswrapper[4907]: I0127 18:09:38.214988 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" probeResult="failure" output=< Jan 27 18:09:38 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:09:38 crc kubenswrapper[4907]: > Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.326229 4907 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.328956 4907 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.330200 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348503 4907 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348974 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348995 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348997 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.349049 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.349005 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350424 4907 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350819 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350841 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350866 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350878 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350908 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350922 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350948 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351148 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.351163 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351174 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.351192 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351204 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351372 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351389 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351451 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351469 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351483 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.351677 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351693 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351852 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.385038 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515467 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515579 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515997 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516177 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516273 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516340 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 
18:09:42.617254 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617318 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617338 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617372 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617420 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617424 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617450 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617485 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617500 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617503 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617519 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617607 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617528 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617824 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617918 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.676941 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: W0127 18:09:42.706657 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6 WatchSource:0}: Error finding container ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6: Status 404 returned error can't find the container with id ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6 Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.710844 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea8dfc92f2dcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,LastTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC 
m=+237.838951939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.782487 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.784802 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785519 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785565 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785575 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785584 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531" exitCode=2 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785608 4907 scope.go:117] "RemoveContainer" containerID="51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.787591 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6"} Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.789380 4907 generic.go:334] "Generic (PLEG): container finished" podID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerID="84d1252629f94f6052dc7e9f370b57a2a08987285cf9a2ccd0aa7378e581c2a3" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.789479 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerDied","Data":"84d1252629f94f6052dc7e9f370b57a2a08987285cf9a2ccd0aa7378e581c2a3"} Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.790314 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.790655 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.791131 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:43 crc 
kubenswrapper[4907]: I0127 18:09:43.799580 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232"} Jan 27 18:09:43 crc kubenswrapper[4907]: I0127 18:09:43.800439 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:43 crc kubenswrapper[4907]: I0127 18:09:43.801913 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:43 crc kubenswrapper[4907]: I0127 18:09:43.802687 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.179836 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.180738 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.181422 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342687 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342769 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342804 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9f4bcf33-f579-4173-afa7-055fe0ed0e8b" (UID: "9f4bcf33-f579-4173-afa7-055fe0ed0e8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342861 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock" (OuterVolumeSpecName: "var-lock") pod "9f4bcf33-f579-4173-afa7-055fe0ed0e8b" (UID: "9f4bcf33-f579-4173-afa7-055fe0ed0e8b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.343754 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.343816 4907 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.344438 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.345195 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.345828 4907 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.346284 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.352461 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9f4bcf33-f579-4173-afa7-055fe0ed0e8b" (UID: "9f4bcf33-f579-4173-afa7-055fe0ed0e8b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.445475 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.814313 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.815416 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80" exitCode=0 Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.818809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerDied","Data":"c2422fce3252ef87a2db89e76b11dc4004496dc8add034465ec5552dbf70bdce"} Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.818882 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2422fce3252ef87a2db89e76b11dc4004496dc8add034465ec5552dbf70bdce" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.818845 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.834506 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.834953 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.835310 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.065713 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.065958 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066202 4907 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066405 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066642 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.066672 4907 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066865 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="200ms" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.260122 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.261882 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.262602 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.263087 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.263518 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.263954 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.267432 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="400ms" Jan 27 
18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459301 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459421 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459439 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459487 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459600 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459636 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459926 4907 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459951 4907 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459970 4907 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.668020 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="800ms" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.753517 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 
crc kubenswrapper[4907]: I0127 18:09:45.754533 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.755060 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.755355 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.764289 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.841684 4907 scope.go:117] "RemoveContainer" containerID="5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.841887 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.843547 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.844089 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.844456 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.844864 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.847610 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.848066 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.848730 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.849883 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.863001 4907 scope.go:117] "RemoveContainer" containerID="46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.897925 4907 scope.go:117] "RemoveContainer" containerID="992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.914884 4907 scope.go:117] "RemoveContainer" containerID="aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.931467 4907 scope.go:117] "RemoveContainer" containerID="e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80" Jan 27 18:09:45 crc 
kubenswrapper[4907]: I0127 18:09:45.951618 4907 scope.go:117] "RemoveContainer" containerID="20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728" Jan 27 18:09:46 crc kubenswrapper[4907]: E0127 18:09:46.005953 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea8dfc92f2dcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,LastTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.082756 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.083761 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: 
I0127 18:09:46.084175 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.084648 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.084976 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.085225 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: E0127 18:09:46.468631 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="1.6s" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.204981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.205620 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.206680 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.207185 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.207607 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.207991 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.208316 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.239781 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.240632 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.241029 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.241489 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.241911 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" 
pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.242196 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.242611 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:48 crc kubenswrapper[4907]: E0127 18:09:48.070008 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="3.2s" Jan 27 18:09:51 crc kubenswrapper[4907]: E0127 18:09:51.270704 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="6.4s" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.747979 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.752426 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.753227 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.754113 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.754844 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.763659 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.764583 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.765254 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.765806 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.766399 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.766989 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.774179 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.774234 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:55 crc kubenswrapper[4907]: E0127 18:09:55.774860 4907 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.775961 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:55 crc kubenswrapper[4907]: W0127 18:09:55.813880 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a WatchSource:0}: Error finding container eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a: Status 404 returned error can't find the container with id eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a Jan 27 18:09:56 crc kubenswrapper[4907]: E0127 18:09:56.008067 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea8dfc92f2dcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,LastTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.493834 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.493902 4907 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909" exitCode=1 Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.493990 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909"} Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.494648 4907 scope.go:117] "RemoveContainer" containerID="56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.494966 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.495273 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.495704 4907 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.496092 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.496849 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497410 4907 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4520c50f3526756fe670efd97cec6f84614e2659a6a8da7357dfa1fdf34161f4" exitCode=0 Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497489 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4520c50f3526756fe670efd97cec6f84614e2659a6a8da7357dfa1fdf34161f4"} Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a"} Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497506 4907 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497930 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497954 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:56 crc kubenswrapper[4907]: E0127 18:09:56.498275 4907 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.498608 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.499138 4907 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.499708 4907 status_manager.go:851] "Failed to get status for pod" 
podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.500305 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.501174 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.502013 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.516214 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.516327 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f"} Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.520253 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbfabf6bdb370d8ddd4e8c144d8680688298a9eb3d89e99d995d9e8dbdbcdb98"} Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.520289 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a7c29a98be59fbfc6081c6126c1b422e6d10a46006b1798155b83b6f83ad77f5"} Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.520301 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0bcc01cfb7779933ae708a32a8992b37eaaead0a4a3626177097b65bbf6f4c1b"} Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.528709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a28f657812abf772ebda7da819e5f32dac72bfcedc3e01636e6799a57bd0e649"} Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529073 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8db4893b9a3883d33aaad0c6486c434baac421647622d5d28810bed69c579c31"} Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529306 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529319 4907 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529516 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.776735 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.777285 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.778081 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.785755 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.328527 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift" containerID="cri-o://764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc" gracePeriod=15 Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.547502 4907 generic.go:334] "Generic (PLEG): container finished" podID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerID="764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc" exitCode=0 Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.547637 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" 
event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerDied","Data":"764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc"} Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.886756 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063190 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063258 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063289 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063313 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063353 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063405 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063455 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063476 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063508 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063535 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063592 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063628 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063650 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063674 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063917 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.064536 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.065041 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.066156 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.068483 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.070720 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.071164 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.072273 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.072692 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs" (OuterVolumeSpecName: "kube-api-access-vdmhs") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "kube-api-access-vdmhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.074954 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.075357 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.077224 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.077796 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.077987 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165165 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165220 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165244 4907 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165267 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165419 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165442 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165461 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165480 4907 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165504 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165531 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165628 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165654 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165682 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165710 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.557270 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerDied","Data":"4a8097cce43ecee42c97c1d9ab5869697b268e0b34ef8036d5f9d6948ff49dc9"}
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.557316 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.557346 4907 scope.go:117] "RemoveContainer" containerID="764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc"
Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.541265 4907 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.564545 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079"
Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.564586 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079"
Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.569043 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:10:04 crc kubenswrapper[4907]: I0127 18:10:04.572051 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079"
Jan 27 18:10:04 crc kubenswrapper[4907]: I0127 18:10:04.572104 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079"
Jan 27 18:10:05 crc kubenswrapper[4907]: I0127 18:10:05.768683 4907 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ac3d38ac-9819-47d6-9772-28743485f643"
Jan 27 18:10:06 crc kubenswrapper[4907]: I0127 18:10:06.267085 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:10:06 crc kubenswrapper[4907]: I0127 18:10:06.267513 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 27 18:10:06 crc kubenswrapper[4907]: I0127 18:10:06.267582 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 27 18:10:13 crc kubenswrapper[4907]: I0127 18:10:13.301538 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.164022 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.357435 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.630929 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.674998 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.059695 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.073902 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.086103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.489600 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.641322 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.820999 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.903121 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.149379 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.154936 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.215977 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.266961 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.267042 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.302923 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.328391 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.444872 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.446246 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.465082 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.496921 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.527912 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.535712 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.583333 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.657128 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.690103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.843661 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.962290 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.967048 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.081465 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.117638 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.259149 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.259595 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.325256 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.364030 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.373419 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.405146 4907 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.422987 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.426779 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.552814 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.582621 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.820372 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.821713 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.950725 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.959364 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.075548 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.148594 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.320312 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.386596 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.408602 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.453073 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.549831 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.581057 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.591000 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.616757 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.662338 4907 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.665143 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.669534 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.698972 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.699457 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.711110 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.740360 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.925103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.000791 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.019091 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.064791 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.086484 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.137097 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.341980 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.342759 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.393207 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.464654 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.497949 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.526476 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.527396 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.617207 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.769658 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.809893 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.860681 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.896753 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.005757 4907 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.014128 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.037504 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.052709 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.054309 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.055138 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.074974 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.096701 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.101683 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.122913 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.170585 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.178965 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.196008 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.201834 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.545175 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.597525 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.643370 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.676792 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.678375 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.019384 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.114236 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.169222 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.227120 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.239454 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.326978 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.357583 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.474497 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.606426 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.622969 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.635128 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.647227 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.718480 4907 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.744577 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.759987 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.802281 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.925010 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.950871 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.978890 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.015618 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.119750 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.120435 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.120807 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.136137 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.139397 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.145344 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.179220 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.206063 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.267548 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.351321 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.690700 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.770687 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.800060 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.981720 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.077034 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.247577 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.319390 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.376774 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.397177 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.402073 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.412384 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.431477 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.507270 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.592075 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.622246 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.659343 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.680605 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.767434 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.903792 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.907122 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.033637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.034087 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.129353 4907 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.147173 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.201869 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.275821 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.275887 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.337997 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.372980 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.678245 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.778178 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.801271 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.839809 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 18:10:24 
crc kubenswrapper[4907]: I0127 18:10:24.866515 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.004695 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.036697 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.053542 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.136306 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.180569 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.209639 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.220899 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.300196 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.357864 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.639270 4907 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.661699 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.704312 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.775755 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.935658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.990884 4907 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.009457 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.084940 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.267197 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.267307 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.267385 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.268871 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.269091 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f" gracePeriod=30 Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.292712 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.302262 4907 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.306754 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.306726951 podStartE2EDuration="44.306726951s" podCreationTimestamp="2026-01-27 18:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:10:03.401674505 +0000 UTC m=+258.530957117" watchObservedRunningTime="2026-01-27 18:10:26.306726951 +0000 UTC m=+281.436009613" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311363 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311445 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-788784fd4b-j7f9b","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:10:26 crc kubenswrapper[4907]: E0127 18:10:26.311795 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311832 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift" Jan 27 18:10:26 crc kubenswrapper[4907]: E0127 18:10:26.311876 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerName="installer" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311895 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerName="installer" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311975 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312005 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312100 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerName="installer" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312139 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312953 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.316313 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.316983 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.320654 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.321681 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.324317 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.324352 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326424 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326532 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 
18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326633 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326664 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327156 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327395 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327599 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.336738 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.342957 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.345912 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.358194 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.359937 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.359917841 podStartE2EDuration="23.359917841s" podCreationTimestamp="2026-01-27 18:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:10:26.355997545 +0000 UTC m=+281.485280177" watchObservedRunningTime="2026-01-27 18:10:26.359917841 +0000 UTC m=+281.489200463" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.361535 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-error\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449382 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449481 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-policies\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " 
pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449533 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-dir\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449629 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449699 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449747 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxtc\" (UniqueName: \"kubernetes.io/projected/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-kube-api-access-dfxtc\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450033 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450150 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450236 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450290 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450327 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-session\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450379 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-login\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450514 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.527385 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551415 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551511 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551547 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551607 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-session\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551652 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-login\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: 
\"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551694 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551730 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-error\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551816 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-policies\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551849 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-dir\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551943 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.552017 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxtc\" (UniqueName: \"kubernetes.io/projected/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-kube-api-access-dfxtc\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.552602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-dir\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.553778 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-policies\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.554230 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.555033 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.555692 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.562750 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.562806 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-login\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.564320 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-error\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.565066 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.566732 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.567176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-session\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.567472 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.569130 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.575351 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.580437 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxtc\" (UniqueName: \"kubernetes.io/projected/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-kube-api-access-dfxtc\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.598796 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.639642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.676231 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.696809 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.829225 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.894139 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.982215 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.027637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.037356 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.062133 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.137080 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.210226 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.321430 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.390714 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.400491 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.418983 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.483413 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-788784fd4b-j7f9b"]
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.487226 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.505203 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.635974 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.655099 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.757349 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.764449 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" path="/var/lib/kubelet/pods/df82c5b4-85d6-4b74-85f5-46d598058d2d/volumes"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.788022 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.804189 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.871079 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.876268 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.930142 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-788784fd4b-j7f9b"]
Jan 27 18:10:27 crc kubenswrapper[4907]: W0127 18:10:27.938196 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8a8fcf5_2457_47d4_9f00_6aad27a2cc1f.slice/crio-5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac WatchSource:0}: Error finding container 5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac: Status 404 returned error can't find the container with id 5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac
Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.973105 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.197089 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.200938 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.293994 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.299305 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.357191 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.412737 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.616226 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.714493 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.720813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerStarted","Data":"584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9"}
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.720880 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerStarted","Data":"5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac"}
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.721208 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.728744 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.748444 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podStartSLOduration=52.748423789 podStartE2EDuration="52.748423789s" podCreationTimestamp="2026-01-27 18:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:10:28.744461852 +0000 UTC m=+283.873744464" watchObservedRunningTime="2026-01-27 18:10:28.748423789 +0000 UTC m=+283.877706401"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.758279 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.848769 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.038685 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.041894 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.242778 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.251257 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.290835 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.413015 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.495796 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.516542 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.658034 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.910095 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.949439 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.989426 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 27 18:10:30 crc kubenswrapper[4907]: I0127 18:10:30.263913 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 18:10:30 crc kubenswrapper[4907]: I0127 18:10:30.442946 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:10:30 crc kubenswrapper[4907]: I0127 18:10:30.567150 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 18:10:37 crc kubenswrapper[4907]: I0127 18:10:37.421861 4907 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 27 18:10:37 crc kubenswrapper[4907]: I0127 18:10:37.422660 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232" gracePeriod=5
Jan 27 18:10:42 crc kubenswrapper[4907]: I0127 18:10:42.822757 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 27 18:10:42 crc kubenswrapper[4907]: I0127 18:10:42.823534 4907 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232" exitCode=137
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.026753 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.027238 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096845 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096925 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096955 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096963 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096982 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097025 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097069 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097282 4907 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097307 4907 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097323 4907 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097366 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.112085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.197941 4907 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.197975 4907 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.760129 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.760430 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.778924 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.778978 4907 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3bda6022-73ad-4f22-80f9-94f18a1c9d59"
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.785958 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.786012 4907 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3bda6022-73ad-4f22-80f9-94f18a1c9d59"
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.838008 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.838124 4907 scope.go:117] "RemoveContainer" containerID="53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232"
Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.838272 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:10:44 crc kubenswrapper[4907]: I0127 18:10:44.850663 4907 generic.go:334] "Generic (PLEG): container finished" podID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" exitCode=0
Jan 27 18:10:44 crc kubenswrapper[4907]: I0127 18:10:44.850780 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerDied","Data":"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad"}
Jan 27 18:10:44 crc kubenswrapper[4907]: I0127 18:10:44.852207 4907 scope.go:117] "RemoveContainer" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad"
Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.490501 4907 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.861466 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerStarted","Data":"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163"}
Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.862475 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.865535 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.938475 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942201 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942269 4907 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f" exitCode=137
Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942315 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f"}
Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942371 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8beceedec1766937d7e161638b23c21838a345d08e38ed80357dfb2e4490308a"}
Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942400 4907 scope.go:117] "RemoveContainer" containerID="56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909"
Jan 27 18:10:57 crc kubenswrapper[4907]: I0127 18:10:57.949784 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 27 18:11:00 crc kubenswrapper[4907]: I0127 18:11:00.777935 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:11:02 crc kubenswrapper[4907]: I0127 18:11:02.882034 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"]
Jan 27 18:11:02 crc kubenswrapper[4907]: I0127 18:11:02.882708 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-glcgf" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server" containerID="cri-o://48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303" gracePeriod=2
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.312818 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf"
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.496903 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"ed699310-2f9f-414f-ad04-7778af36ddb7\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") "
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.496997 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"ed699310-2f9f-414f-ad04-7778af36ddb7\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") "
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.497117 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"ed699310-2f9f-414f-ad04-7778af36ddb7\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") "
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.498539 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities" (OuterVolumeSpecName: "utilities") pod "ed699310-2f9f-414f-ad04-7778af36ddb7" (UID: "ed699310-2f9f-414f-ad04-7778af36ddb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.506166 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6" (OuterVolumeSpecName: "kube-api-access-8wsq6") pod "ed699310-2f9f-414f-ad04-7778af36ddb7" (UID: "ed699310-2f9f-414f-ad04-7778af36ddb7"). InnerVolumeSpecName "kube-api-access-8wsq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.522304 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed699310-2f9f-414f-ad04-7778af36ddb7" (UID: "ed699310-2f9f-414f-ad04-7778af36ddb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.598917 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.598970 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.598986 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.990895 4907 generic.go:334] "Generic (PLEG): container finished" podID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303" exitCode=0
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.990963 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"}
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.991003 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf"}
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.991027 4907 scope.go:117] "RemoveContainer" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"
Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.991060 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf"
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.020020 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"]
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.021718 4907 scope.go:117] "RemoveContainer" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.027331 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"]
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.043946 4907 scope.go:117] "RemoveContainer" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13"
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.069931 4907 scope.go:117] "RemoveContainer" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"
Jan 27 18:11:04 crc kubenswrapper[4907]: E0127 18:11:04.070661 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303\": container with ID starting with 48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303 not found: ID does not exist" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.070782 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"} err="failed to get container status \"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303\": rpc error: code = NotFound desc = could not find container \"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303\": container with ID starting with 48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303 not found: ID does not exist"
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.070833 4907 scope.go:117] "RemoveContainer" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"
Jan 27 18:11:04 crc kubenswrapper[4907]: E0127 18:11:04.071288 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185\": container with ID starting with 8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185 not found: ID does not exist" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.071331 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"} err="failed to get container status \"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185\": rpc error: code = NotFound desc = could not find container \"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185\": container with ID starting with 8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185 not found: ID does not exist"
Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.071359 4907 scope.go:117] "RemoveContainer" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13"
Jan 27 18:11:04 crc kubenswrapper[4907]: E0127
18:11:04.071817 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13\": container with ID starting with 41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13 not found: ID does not exist" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.071856 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13"} err="failed to get container status \"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13\": rpc error: code = NotFound desc = could not find container \"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13\": container with ID starting with 41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13 not found: ID does not exist" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.085820 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.087691 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b7l4d" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server" containerID="cri-o://99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" gracePeriod=2 Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.498219 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.630681 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"f317b8ef-4875-4f24-8926-8efd5826a51e\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.630772 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"f317b8ef-4875-4f24-8926-8efd5826a51e\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.630838 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"f317b8ef-4875-4f24-8926-8efd5826a51e\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.632144 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities" (OuterVolumeSpecName: "utilities") pod "f317b8ef-4875-4f24-8926-8efd5826a51e" (UID: "f317b8ef-4875-4f24-8926-8efd5826a51e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.640888 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68" (OuterVolumeSpecName: "kube-api-access-9dz68") pod "f317b8ef-4875-4f24-8926-8efd5826a51e" (UID: "f317b8ef-4875-4f24-8926-8efd5826a51e"). InnerVolumeSpecName "kube-api-access-9dz68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.680413 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f317b8ef-4875-4f24-8926-8efd5826a51e" (UID: "f317b8ef-4875-4f24-8926-8efd5826a51e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.732195 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.732267 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.732296 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.757524 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" path="/var/lib/kubelet/pods/ed699310-2f9f-414f-ad04-7778af36ddb7/volumes" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007049 4907 generic.go:334] "Generic (PLEG): container finished" podID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" exitCode=0 Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007107 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007126 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392"} Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007609 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"381af3184b48628759e0e418b748e32d55bc4e48955c79f0bca42f10d1b84973"} Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007631 4907 scope.go:117] "RemoveContainer" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.024597 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.027081 4907 scope.go:117] "RemoveContainer" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.028154 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.052408 4907 scope.go:117] "RemoveContainer" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.065775 4907 scope.go:117] "RemoveContainer" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" Jan 27 18:11:06 crc kubenswrapper[4907]: E0127 18:11:06.066101 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392\": container with ID starting with 99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392 not found: ID does not exist" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066153 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392"} err="failed to get container status \"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392\": rpc error: code = NotFound desc = could not find container \"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392\": container with ID starting with 99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392 not found: ID does not exist" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066186 4907 scope.go:117] "RemoveContainer" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" Jan 27 18:11:06 crc kubenswrapper[4907]: E0127 18:11:06.066479 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685\": container with ID starting with f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685 not found: ID does not exist" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066511 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685"} err="failed to get container status \"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685\": rpc error: code = NotFound desc = could not find container \"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685\": container with ID 
starting with f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685 not found: ID does not exist" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066535 4907 scope.go:117] "RemoveContainer" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" Jan 27 18:11:06 crc kubenswrapper[4907]: E0127 18:11:06.066830 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d\": container with ID starting with 98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d not found: ID does not exist" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066853 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d"} err="failed to get container status \"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d\": rpc error: code = NotFound desc = could not find container \"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d\": container with ID starting with 98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d not found: ID does not exist" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.267028 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.271039 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:11:07 crc kubenswrapper[4907]: I0127 18:11:07.018248 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:11:07 crc 
kubenswrapper[4907]: I0127 18:11:07.760000 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" path="/var/lib/kubelet/pods/f317b8ef-4875-4f24-8926-8efd5826a51e/volumes" Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.856362 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.857364 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager" containerID="cri-o://2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06" gracePeriod=30 Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.863301 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.863524 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager" containerID="cri-o://d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de" gracePeriod=30 Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.115636 4907 generic.go:334] "Generic (PLEG): container finished" podID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerID="2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06" exitCode=0 Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.117893 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" 
event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerDied","Data":"2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06"} Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.149218 4907 generic.go:334] "Generic (PLEG): container finished" podID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerID="d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de" exitCode=0 Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.149279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerDied","Data":"d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de"} Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.277070 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.387717 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404230 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404304 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404354 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404385 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404455 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.405297 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca" (OuterVolumeSpecName: "client-ca") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.405424 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config" (OuterVolumeSpecName: "config") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.405514 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.412007 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb" (OuterVolumeSpecName: "kube-api-access-9r5tb") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "kube-api-access-9r5tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.416204 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506047 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506118 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506150 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506222 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506426 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506442 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc 
kubenswrapper[4907]: I0127 18:11:17.506453 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506465 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506476 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.507354 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca" (OuterVolumeSpecName: "client-ca") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.507520 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config" (OuterVolumeSpecName: "config") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.510935 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb" (OuterVolumeSpecName: "kube-api-access-qlgrb") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). 
InnerVolumeSpecName "kube-api-access-qlgrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.510975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607847 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607894 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607908 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607922 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.156485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerDied","Data":"0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f"} Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.156564 
4907 scope.go:117] "RemoveContainer" containerID="d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.158246 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerDied","Data":"e14ea8d0f79988b247be75c0c550ba68530c65e3005c05e315cd0f64e6973a7d"}
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.158301 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.158546 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.179630 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"]
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.180051 4907 scope.go:117] "RemoveContainer" containerID="2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.182765 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"]
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.192905 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"]
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.197304 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"]
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.777385 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"]
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778134 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778157 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778174 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-content"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778184 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-content"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778202 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-utilities"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778211 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-utilities"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778225 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778233 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778248 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-content"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778256 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-content"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778270 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778278 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778288 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-utilities"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778296 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-utilities"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778308 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778316 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server"
Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778329 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778337 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778451 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778469 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778502 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778521 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778532 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.779017 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.783201 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.784567 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.784832 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.784848 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.785105 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.785366 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.786893 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"]
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.787666 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.790348 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792344 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792397 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792579 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792701 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792473 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.795226 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.799743 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"]
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.833673 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"]
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928244 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928320 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928402 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-config\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928430 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b0a63e6-0f9c-42b7-8006-fbd93909482e-serving-cert\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928500 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928541 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-client-ca\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.929137 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.929198 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.929344 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrb2t\" (UniqueName: \"kubernetes.io/projected/4b0a63e6-0f9c-42b7-8006-fbd93909482e-kube-api-access-rrb2t\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.030551 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrb2t\" (UniqueName: \"kubernetes.io/projected/4b0a63e6-0f9c-42b7-8006-fbd93909482e-kube-api-access-rrb2t\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.030986 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-config\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.031014 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032243 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032263 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b0a63e6-0f9c-42b7-8006-fbd93909482e-serving-cert\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032130 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032180 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-config\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032626 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-client-ca\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033477 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-client-ca\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033621 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033648 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033658 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.034755 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.039139 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.044395 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b0a63e6-0f9c-42b7-8006-fbd93909482e-serving-cert\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.047064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.048754 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrb2t\" (UniqueName: \"kubernetes.io/projected/4b0a63e6-0f9c-42b7-8006-fbd93909482e-kube-api-access-rrb2t\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.102947 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.111173 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.323128 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"]
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.435949 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"]
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.756734 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" path="/var/lib/kubelet/pods/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13/volumes"
Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.758083 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" path="/var/lib/kubelet/pods/68c8acc2-637c-4006-848e-bed0c1ea77fc/volumes"
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.175234 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" event={"ID":"4b0a63e6-0f9c-42b7-8006-fbd93909482e","Type":"ContainerStarted","Data":"5bfb8f59620c18f94e4f2d606be33ed7a3f153e53b15c3e01d3af035f4226fda"}
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.175295 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" event={"ID":"4b0a63e6-0f9c-42b7-8006-fbd93909482e","Type":"ContainerStarted","Data":"235dbd8cb9b8deef26f39c83aed756e442aa6ab8bf9d6a9b5d7614669590af61"}
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.175448 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.176983 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerStarted","Data":"37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d"}
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.177030 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerStarted","Data":"18f6e5e3b1ee0f3ca11561596a6ba9391fff07439b9fa26c916078f6a5a21c7a"}
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.177241 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.180048 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.180861 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.198325 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podStartSLOduration=4.19830993 podStartE2EDuration="4.19830993s" podCreationTimestamp="2026-01-27 18:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:20.194285861 +0000 UTC m=+335.323568473" watchObservedRunningTime="2026-01-27 18:11:20.19830993 +0000 UTC m=+335.327592542"
Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.247025 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" podStartSLOduration=4.247005525 podStartE2EDuration="4.247005525s" podCreationTimestamp="2026-01-27 18:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:20.242108861 +0000 UTC m=+335.371391473" watchObservedRunningTime="2026-01-27 18:11:20.247005525 +0000 UTC m=+335.376288137"
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.016332 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"]
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.017360 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager" containerID="cri-o://37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d" gracePeriod=30
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.379722 4907 generic.go:334] "Generic (PLEG): container finished" podID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerID="37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d" exitCode=0
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.379799 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerDied","Data":"37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d"}
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.459993 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544136 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") "
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544211 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") "
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544250 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") "
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544304 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") "
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544343 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") "
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545184 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545369 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config" (OuterVolumeSpecName: "config") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545568 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca" (OuterVolumeSpecName: "client-ca") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545810 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545829 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545843 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.550136 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.550250 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx" (OuterVolumeSpecName: "kube-api-access-xmwcx") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "kube-api-access-xmwcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.646832 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.646882 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") on node \"crc\" DevicePath \"\""
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.921824 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"]
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.922400 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cg67x" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" containerID="cri-o://cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4" gracePeriod=30
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.932237 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"]
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.932469 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mhc2f" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" containerID="cri-o://f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" gracePeriod=30
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.947923 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"]
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.959348 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"]
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.959623 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-klwtz" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" containerID="cri-o://ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84" gracePeriod=30
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.975646 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87z2b"]
Jan 27 18:11:53 crc kubenswrapper[4907]: E0127 18:11:53.976014 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager"
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.976039 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager"
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.976194 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager"
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.976678 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.983498 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87z2b"]
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.989748 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"]
Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.990055 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" containerID="cri-o://c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183" gracePeriod=30
Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.053736 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.053799 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnnb\" (UniqueName: \"kubernetes.io/projected/5564598e-ff23-4f9e-b3de-64e127e94da6-kube-api-access-hxnnb\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.053930 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.155166 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.155214 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.155276 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnnb\" (UniqueName: \"kubernetes.io/projected/5564598e-ff23-4f9e-b3de-64e127e94da6-kube-api-access-hxnnb\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.156754 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b"
Jan 27 18:11:54 crc 
kubenswrapper[4907]: I0127 18:11:54.163707 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.172587 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnnb\" (UniqueName: \"kubernetes.io/projected/5564598e-ff23-4f9e-b3de-64e127e94da6-kube-api-access-hxnnb\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.302202 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.309998 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.358828 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"7c7f1204-674f-4d4e-a695-28b2d0956b32\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.359201 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"7c7f1204-674f-4d4e-a695-28b2d0956b32\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.359241 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod \"7c7f1204-674f-4d4e-a695-28b2d0956b32\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.360216 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities" (OuterVolumeSpecName: "utilities") pod "7c7f1204-674f-4d4e-a695-28b2d0956b32" (UID: "7c7f1204-674f-4d4e-a695-28b2d0956b32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.364818 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg" (OuterVolumeSpecName: "kube-api-access-s97lg") pod "7c7f1204-674f-4d4e-a695-28b2d0956b32" (UID: "7c7f1204-674f-4d4e-a695-28b2d0956b32"). InnerVolumeSpecName "kube-api-access-s97lg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.386528 4907 generic.go:334] "Generic (PLEG): container finished" podID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerID="c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.386606 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.387863 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerDied","Data":"18f6e5e3b1ee0f3ca11561596a6ba9391fff07439b9fa26c916078f6a5a21c7a"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.387895 4907 scope.go:117] "RemoveContainer" containerID="37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.387998 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397688 4907 generic.go:334] "Generic (PLEG): container finished" podID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397761 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397797 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"011be499d8b8d8d22772e72b71e952b3184b41de73c3cfac7cf3219b4b7d08b2"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397939 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406385 4907 generic.go:334] "Generic (PLEG): container finished" podID="dee6d631-48d1-4137-9736-c028fb27e655" containerID="ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406474 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406510 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406527 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.408312 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.428932 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerID="cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.429109 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" containerID="cri-o://3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" gracePeriod=30 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.429409 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.429433 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.443278 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.453962 4907 scope.go:117] "RemoveContainer" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460196 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"dee6d631-48d1-4137-9736-c028fb27e655\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " Jan 27 18:11:54 crc 
kubenswrapper[4907]: I0127 18:11:54.460311 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"dee6d631-48d1-4137-9736-c028fb27e655\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460391 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"dee6d631-48d1-4137-9736-c028fb27e655\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460626 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460637 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.466723 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg" (OuterVolumeSpecName: "kube-api-access-frltg") pod "dee6d631-48d1-4137-9736-c028fb27e655" (UID: "dee6d631-48d1-4137-9736-c028fb27e655"). InnerVolumeSpecName "kube-api-access-frltg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.467775 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities" (OuterVolumeSpecName: "utilities") pod "dee6d631-48d1-4137-9736-c028fb27e655" (UID: "dee6d631-48d1-4137-9736-c028fb27e655"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.489897 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.522965 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c7f1204-674f-4d4e-a695-28b2d0956b32" (UID: "7c7f1204-674f-4d4e-a695-28b2d0956b32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.523052 4907 scope.go:117] "RemoveContainer" containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.536035 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dee6d631-48d1-4137-9736-c028fb27e655" (UID: "dee6d631-48d1-4137-9736-c028fb27e655"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.551518 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561534 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"7ee8faea-87ec-4620-b6a8-db398d35039a\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561699 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"1f9526ea-3ca9-4727-aadd-3103419511d9\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561729 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"7ee8faea-87ec-4620-b6a8-db398d35039a\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561798 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"1f9526ea-3ca9-4727-aadd-3103419511d9\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561826 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"7ee8faea-87ec-4620-b6a8-db398d35039a\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561847 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"1f9526ea-3ca9-4727-aadd-3103419511d9\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562036 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562049 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562059 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562070 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.563905 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities" (OuterVolumeSpecName: "utilities") pod "1f9526ea-3ca9-4727-aadd-3103419511d9" (UID: "1f9526ea-3ca9-4727-aadd-3103419511d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.573507 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg" (OuterVolumeSpecName: "kube-api-access-9khpg") pod "1f9526ea-3ca9-4727-aadd-3103419511d9" (UID: "1f9526ea-3ca9-4727-aadd-3103419511d9"). InnerVolumeSpecName "kube-api-access-9khpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.574300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities" (OuterVolumeSpecName: "utilities") pod "7ee8faea-87ec-4620-b6a8-db398d35039a" (UID: "7ee8faea-87ec-4620-b6a8-db398d35039a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.577985 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87z2b"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.612300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb" (OuterVolumeSpecName: "kube-api-access-mthtb") pod "7ee8faea-87ec-4620-b6a8-db398d35039a" (UID: "7ee8faea-87ec-4620-b6a8-db398d35039a"). InnerVolumeSpecName "kube-api-access-mthtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.622104 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ee8faea-87ec-4620-b6a8-db398d35039a" (UID: "7ee8faea-87ec-4620-b6a8-db398d35039a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.641617 4907 scope.go:117] "RemoveContainer" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" Jan 27 18:11:54 crc kubenswrapper[4907]: W0127 18:11:54.647009 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5564598e_ff23_4f9e_b3de_64e127e94da6.slice/crio-c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf WatchSource:0}: Error finding container c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf: Status 404 returned error can't find the container with id c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663623 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663660 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663704 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663714 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663723 4907 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.679162 4907 scope.go:117] "RemoveContainer" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.682491 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d\": container with ID starting with f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d not found: ID does not exist" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.682535 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d"} err="failed to get container status \"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d\": rpc error: code = NotFound desc = could not find container \"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d\": container with ID starting with f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d not found: ID does not exist" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.682578 4907 scope.go:117] "RemoveContainer" containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.683009 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5\": container with ID starting with 82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5 not found: ID does not exist" 
containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.683042 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5"} err="failed to get container status \"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5\": rpc error: code = NotFound desc = could not find container \"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5\": container with ID starting with 82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5 not found: ID does not exist" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.683066 4907 scope.go:117] "RemoveContainer" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.683917 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2\": container with ID starting with 28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2 not found: ID does not exist" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.684005 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2"} err="failed to get container status \"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2\": rpc error: code = NotFound desc = could not find container \"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2\": container with ID starting with 28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2 not found: ID does not exist" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.740283 4907 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.744937 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.751007 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f9526ea-3ca9-4727-aadd-3103419511d9" (UID: "1f9526ea-3ca9-4727-aadd-3103419511d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.766301 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.807794 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9f964d47c-l4mx8"] Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808085 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808106 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808123 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808132 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: 
E0127 18:11:54.808143 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808152 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808171 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808373 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808385 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808393 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808408 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808419 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808431 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808439 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 
18:11:54.808452 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808465 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808479 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808488 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808496 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808504 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808519 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808527 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808537 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808546 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 
18:11:54.808721 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808732 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808739 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808751 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.809521 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.812819 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.812883 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.813136 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.813618 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.814888 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.815708 4907 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.819214 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f964d47c-l4mx8"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.819809 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.839664 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968604 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968648 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-config\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969001 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-proxy-ca-bundles\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9sg\" (UniqueName: \"kubernetes.io/projected/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-kube-api-access-hf9sg\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969135 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-client-ca\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-serving-cert\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 
18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1fb72397-1fbe-4f9d-976a-19ca15b2da2c" (UID: "1fb72397-1fbe-4f9d-976a-19ca15b2da2c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.973540 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1fb72397-1fbe-4f9d-976a-19ca15b2da2c" (UID: "1fb72397-1fbe-4f9d-976a-19ca15b2da2c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.976262 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs" (OuterVolumeSpecName: "kube-api-access-f5wcs") pod "1fb72397-1fbe-4f9d-976a-19ca15b2da2c" (UID: "1fb72397-1fbe-4f9d-976a-19ca15b2da2c"). InnerVolumeSpecName "kube-api-access-f5wcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-config\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070625 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-proxy-ca-bundles\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070704 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf9sg\" (UniqueName: \"kubernetes.io/projected/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-kube-api-access-hf9sg\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070778 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-client-ca\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070819 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-serving-cert\") pod 
\"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070914 4907 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070937 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070956 4907 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.071872 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-config\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.072057 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-proxy-ca-bundles\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.072807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-client-ca\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.075125 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-serving-cert\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.088651 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9sg\" (UniqueName: \"kubernetes.io/projected/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-kube-api-access-hf9sg\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.138082 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.337745 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f964d47c-l4mx8"] Jan 27 18:11:55 crc kubenswrapper[4907]: W0127 18:11:55.345519 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48e5b57d_d01a_441e_beac_ef5e5d74dbc1.slice/crio-5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261 WatchSource:0}: Error finding container 5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261: Status 404 returned error can't find the container with id 5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261 Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.439406 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" event={"ID":"5564598e-ff23-4f9e-b3de-64e127e94da6","Type":"ContainerStarted","Data":"8def76b22114f6c4d4e31249c2b3f5500d827d1bd3f90a2f15e0e6e4587d70e2"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.439452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" event={"ID":"5564598e-ff23-4f9e-b3de-64e127e94da6","Type":"ContainerStarted","Data":"c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.440606 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442848 4907 generic.go:334] "Generic (PLEG): container finished" podID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" exitCode=0 Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 
18:11:55.442889 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerDied","Data":"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442907 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerDied","Data":"751f6790eddcfff181547cb7090e8c80fd9fdf4c4aa3c45b341c6ab12bb2cee7"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442923 4907 scope.go:117] "RemoveContainer" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442978 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.445973 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.452254 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"327477b6362453b7f241bd4005f967f63bfcd92d60574b597325042d23e6ed02"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.452367 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.454227 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" event={"ID":"48e5b57d-d01a-441e-beac-ef5e5d74dbc1","Type":"ContainerStarted","Data":"5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.458056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.458169 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.458174 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.465190 4907 scope.go:117] "RemoveContainer" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.470587 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podStartSLOduration=2.470547217 podStartE2EDuration="2.470547217s" podCreationTimestamp="2026-01-27 18:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:55.463854749 +0000 UTC m=+370.593137361" watchObservedRunningTime="2026-01-27 18:11:55.470547217 +0000 UTC m=+370.599829859" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.513465 4907 scope.go:117] "RemoveContainer" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" Jan 27 18:11:55 crc kubenswrapper[4907]: E0127 18:11:55.519649 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163\": container with ID starting with 3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163 not found: ID does not exist" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.519714 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163"} err="failed to get container status \"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163\": rpc error: code = NotFound desc = could not find container \"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163\": container with ID starting with 
3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163 not found: ID does not exist" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.519754 4907 scope.go:117] "RemoveContainer" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" Jan 27 18:11:55 crc kubenswrapper[4907]: E0127 18:11:55.523757 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad\": container with ID starting with a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad not found: ID does not exist" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.523846 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.523843 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad"} err="failed to get container status \"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad\": rpc error: code = NotFound desc = could not find container \"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad\": container with ID starting with a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad not found: ID does not exist" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.523874 4907 scope.go:117] "RemoveContainer" containerID="cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.531057 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.536467 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.540521 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.545805 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.548591 4907 scope.go:117] "RemoveContainer" containerID="ed1ff5a394e52796e4f4ec3501247d97700ad989bc354c05f28efa01945dae35" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.548953 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.560531 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.560609 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.565282 4907 scope.go:117] "RemoveContainer" containerID="be6c8c2b32c82dd2e2cee12f93b1053ef5ddc94b250cbda98a5b387f916f54b6" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.580338 4907 scope.go:117] "RemoveContainer" containerID="c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.593330 4907 scope.go:117] "RemoveContainer" containerID="a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.622364 4907 scope.go:117] "RemoveContainer" containerID="654679c743b9560bbba18b38261b7b4cf9709df04c5818506bb60f06a2ff6062" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.754095 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" path="/var/lib/kubelet/pods/1f9526ea-3ca9-4727-aadd-3103419511d9/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.754903 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" path="/var/lib/kubelet/pods/1fb72397-1fbe-4f9d-976a-19ca15b2da2c/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.755369 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" path="/var/lib/kubelet/pods/7c7f1204-674f-4d4e-a695-28b2d0956b32/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.755971 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" path="/var/lib/kubelet/pods/7ee8faea-87ec-4620-b6a8-db398d35039a/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.756650 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" path="/var/lib/kubelet/pods/b7c08430-9a0d-4699-8521-3ee5c774ceab/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.757447 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee6d631-48d1-4137-9736-c028fb27e655" path="/var/lib/kubelet/pods/dee6d631-48d1-4137-9736-c028fb27e655/volumes" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.147042 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wz7rn"] Jan 27 18:11:56 crc kubenswrapper[4907]: E0127 18:11:56.148325 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148348 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: E0127 
18:11:56.148359 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148366 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148486 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148502 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.149368 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.153146 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.154106 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz7rn"] Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.285052 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-utilities\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.285451 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-catalog-content\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.285493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdzkz\" (UniqueName: \"kubernetes.io/projected/1ec7dee3-a9ee-4bb8-b444-899c120854a7-kube-api-access-cdzkz\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.352173 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrcdt"] Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.353226 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.360264 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrcdt"] Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.362898 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386629 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt79h\" (UniqueName: \"kubernetes.io/projected/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-kube-api-access-vt79h\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-catalog-content\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386781 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-utilities\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386828 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-catalog-content\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386859 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-utilities\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386903 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdzkz\" (UniqueName: \"kubernetes.io/projected/1ec7dee3-a9ee-4bb8-b444-899c120854a7-kube-api-access-cdzkz\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.387659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-catalog-content\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.387729 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-utilities\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.409015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdzkz\" (UniqueName: \"kubernetes.io/projected/1ec7dee3-a9ee-4bb8-b444-899c120854a7-kube-api-access-cdzkz\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.467863 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" event={"ID":"48e5b57d-d01a-441e-beac-ef5e5d74dbc1","Type":"ContainerStarted","Data":"dbb010dcd85aacf9f3dabdbbd2ddbadc6ea10bcdc340a8c793b5232b1a3e3277"} Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.468951 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.472417 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.480694 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.490857 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt79h\" (UniqueName: \"kubernetes.io/projected/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-kube-api-access-vt79h\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.491651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-catalog-content\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.492098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-utilities\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.492153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-catalog-content\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.492645 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-utilities\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " 
pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.515187 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt79h\" (UniqueName: \"kubernetes.io/projected/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-kube-api-access-vt79h\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.521430 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.521500 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.522225 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podStartSLOduration=3.522203176 podStartE2EDuration="3.522203176s" podCreationTimestamp="2026-01-27 18:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:56.494254532 +0000 UTC m=+371.623537174" watchObservedRunningTime="2026-01-27 18:11:56.522203176 +0000 UTC m=+371.651485798" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.676988 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.904431 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz7rn"] Jan 27 18:11:56 crc kubenswrapper[4907]: W0127 18:11:56.907730 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec7dee3_a9ee_4bb8_b444_899c120854a7.slice/crio-647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0 WatchSource:0}: Error finding container 647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0: Status 404 returned error can't find the container with id 647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0 Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.104247 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrcdt"] Jan 27 18:11:57 crc kubenswrapper[4907]: W0127 18:11:57.114667 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bc8a6bd_6efd_4f2d_89f5_0ceb2441efee.slice/crio-da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f WatchSource:0}: Error finding container da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f: Status 404 returned error can't find the container with id da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.487420 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerID="ca90b7d665b9701398c83ce1968de2c0817cf0dde4163aea9d60792056f97329" exitCode=0 Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.487628 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" 
event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerDied","Data":"ca90b7d665b9701398c83ce1968de2c0817cf0dde4163aea9d60792056f97329"} Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.487682 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerStarted","Data":"647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0"} Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.491769 4907 generic.go:334] "Generic (PLEG): container finished" podID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerID="27955dbca42b3a8f1a7aff4e83fce49ec3898fcf11a027de65f125acb5a1b02f" exitCode=0 Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.491865 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerDied","Data":"27955dbca42b3a8f1a7aff4e83fce49ec3898fcf11a027de65f125acb5a1b02f"} Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.491913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerStarted","Data":"da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f"} Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.536748 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.538852 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.542147 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.545438 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.622765 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.622928 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.622983 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.724050 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"community-operators-dhv2c\" (UID: 
\"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.724143 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.724198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.725139 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.725168 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.739837 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dv4j2"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.741020 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.744402 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.745613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.750105 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dv4j2"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.855664 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.927398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-utilities\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.928823 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-catalog-content\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.929062 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z7thh\" (UniqueName: \"kubernetes.io/projected/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-kube-api-access-z7thh\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030117 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7thh\" (UniqueName: \"kubernetes.io/projected/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-kube-api-access-z7thh\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030260 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-utilities\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030289 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-catalog-content\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030901 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-utilities\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030975 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-catalog-content\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.051991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7thh\" (UniqueName: \"kubernetes.io/projected/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-kube-api-access-z7thh\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.075452 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.084759 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 18:11:59 crc kubenswrapper[4907]: W0127 18:11:59.089421 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae6221e_526b_4cc4_9f9b_1079238c9100.slice/crio-9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5 WatchSource:0}: Error finding container 9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5: Status 404 returned error can't find the container with id 9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5 Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.493177 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dv4j2"] Jan 27 18:11:59 crc kubenswrapper[4907]: W0127 18:11:59.499525 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdf800ed_f5e8_4478_9e7a_98c7c95c7c52.slice/crio-c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d 
WatchSource:0}: Error finding container c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d: Status 404 returned error can't find the container with id c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.503837 4907 generic.go:334] "Generic (PLEG): container finished" podID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerID="74c450a7c4e4a16e788bf96635acd49f01f09f365c5a97fb77a1f1947ba88ae4" exitCode=0 Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.503895 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"74c450a7c4e4a16e788bf96635acd49f01f09f365c5a97fb77a1f1947ba88ae4"} Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.503922 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerStarted","Data":"9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5"} Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.511673 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerID="e68013733679d8e7f3be167dec106b649367fb7c36522d88ada5adc70676933c" exitCode=0 Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.511707 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerDied","Data":"e68013733679d8e7f3be167dec106b649367fb7c36522d88ada5adc70676933c"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.520372 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" 
event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerStarted","Data":"cc7da9e386977f5b19150dca1e503ede1a777a4454d7f9fa64b03a15715fd90e"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.522903 4907 generic.go:334] "Generic (PLEG): container finished" podID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerID="b7266f750cde318d9c3f62b27d5f6047c4fea9efc35c551ff041bb8284b03a09" exitCode=0 Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.522950 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerDied","Data":"b7266f750cde318d9c3f62b27d5f6047c4fea9efc35c551ff041bb8284b03a09"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.522977 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerStarted","Data":"c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.537679 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wz7rn" podStartSLOduration=2.060381655 podStartE2EDuration="4.537662497s" podCreationTimestamp="2026-01-27 18:11:56 +0000 UTC" firstStartedPulling="2026-01-27 18:11:57.489734174 +0000 UTC m=+372.619016776" lastFinishedPulling="2026-01-27 18:11:59.967015006 +0000 UTC m=+375.096297618" observedRunningTime="2026-01-27 18:12:00.536524003 +0000 UTC m=+375.665806615" watchObservedRunningTime="2026-01-27 18:12:00.537662497 +0000 UTC m=+375.666945109" Jan 27 18:12:01 crc kubenswrapper[4907]: I0127 18:12:01.530377 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerStarted","Data":"6ae58e1ef09b4f66b2717c8cf4aadcafcef2699705a58628245ddee799aad597"} 
Jan 27 18:12:01 crc kubenswrapper[4907]: I0127 18:12:01.533088 4907 generic.go:334] "Generic (PLEG): container finished" podID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerID="9767b9bd6335f81b22f4e7d1b7fb00bd57b538db401b0811063af3d06773de87" exitCode=0 Jan 27 18:12:01 crc kubenswrapper[4907]: I0127 18:12:01.533146 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"9767b9bd6335f81b22f4e7d1b7fb00bd57b538db401b0811063af3d06773de87"} Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.541213 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerStarted","Data":"e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc"} Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.542761 4907 generic.go:334] "Generic (PLEG): container finished" podID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerID="6ae58e1ef09b4f66b2717c8cf4aadcafcef2699705a58628245ddee799aad597" exitCode=0 Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.542784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerDied","Data":"6ae58e1ef09b4f66b2717c8cf4aadcafcef2699705a58628245ddee799aad597"} Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.560811 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhv2c" podStartSLOduration=2.084805339 podStartE2EDuration="4.560789571s" podCreationTimestamp="2026-01-27 18:11:58 +0000 UTC" firstStartedPulling="2026-01-27 18:11:59.507967295 +0000 UTC m=+374.637249907" lastFinishedPulling="2026-01-27 18:12:01.983951527 +0000 UTC m=+377.113234139" observedRunningTime="2026-01-27 18:12:02.559032679 +0000 
UTC m=+377.688315281" watchObservedRunningTime="2026-01-27 18:12:02.560789571 +0000 UTC m=+377.690072183" Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.548868 4907 generic.go:334] "Generic (PLEG): container finished" podID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerID="e6c31016bbde6bf87c1c86e3bdb5686fcee00d9e883d03acd23cb8341dbc91a1" exitCode=0 Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.549328 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerDied","Data":"e6c31016bbde6bf87c1c86e3bdb5686fcee00d9e883d03acd23cb8341dbc91a1"} Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.553618 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerStarted","Data":"f0f254763b9ca17af73215d2bb760fbeaf3f219136b79a712b781739bbf3dec8"} Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.593590 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dv4j2" podStartSLOduration=3.173656443 podStartE2EDuration="5.593574383s" podCreationTimestamp="2026-01-27 18:11:58 +0000 UTC" firstStartedPulling="2026-01-27 18:12:00.524225741 +0000 UTC m=+375.653508353" lastFinishedPulling="2026-01-27 18:12:02.944143691 +0000 UTC m=+378.073426293" observedRunningTime="2026-01-27 18:12:03.591343238 +0000 UTC m=+378.720625870" watchObservedRunningTime="2026-01-27 18:12:03.593574383 +0000 UTC m=+378.722856995" Jan 27 18:12:04 crc kubenswrapper[4907]: I0127 18:12:04.560302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerStarted","Data":"18d1cc8984ef390f3edf73cb13f16ba0f43d8ca0c9235e5957bdec90d2ad82cb"} Jan 27 18:12:04 crc kubenswrapper[4907]: 
I0127 18:12:04.585025 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrcdt" podStartSLOduration=1.881814831 podStartE2EDuration="8.585002627s" podCreationTimestamp="2026-01-27 18:11:56 +0000 UTC" firstStartedPulling="2026-01-27 18:11:57.493425173 +0000 UTC m=+372.622707815" lastFinishedPulling="2026-01-27 18:12:04.196612989 +0000 UTC m=+379.325895611" observedRunningTime="2026-01-27 18:12:04.581447252 +0000 UTC m=+379.710729864" watchObservedRunningTime="2026-01-27 18:12:04.585002627 +0000 UTC m=+379.714285239" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.481491 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.481990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.525881 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.609120 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.678128 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.678181 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.723197 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:08 crc kubenswrapper[4907]: I0127 18:12:08.855992 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:08 crc kubenswrapper[4907]: I0127 18:12:08.859289 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:08 crc kubenswrapper[4907]: I0127 18:12:08.908356 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.076402 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.076458 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.118414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.633384 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.634971 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:16 crc kubenswrapper[4907]: I0127 18:12:16.729044 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.060094 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq"] Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.065713 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.074107 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.074829 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.075163 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.075353 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.075829 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.078630 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq"] Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.176012 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f06e513-6675-48e3-a197-46a4df6eb319-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.176066 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l44k5\" (UniqueName: \"kubernetes.io/projected/1f06e513-6675-48e3-a197-46a4df6eb319-kube-api-access-l44k5\") pod 
\"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.176096 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1f06e513-6675-48e3-a197-46a4df6eb319-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.276897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1f06e513-6675-48e3-a197-46a4df6eb319-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.277049 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f06e513-6675-48e3-a197-46a4df6eb319-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.277082 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l44k5\" (UniqueName: \"kubernetes.io/projected/1f06e513-6675-48e3-a197-46a4df6eb319-kube-api-access-l44k5\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc 
kubenswrapper[4907]: I0127 18:12:26.278908 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1f06e513-6675-48e3-a197-46a4df6eb319-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.297985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f06e513-6675-48e3-a197-46a4df6eb319-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.298050 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l44k5\" (UniqueName: \"kubernetes.io/projected/1f06e513-6675-48e3-a197-46a4df6eb319-kube-api-access-l44k5\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.407187 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.521105 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.521544 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.824793 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq"] Jan 27 18:12:27 crc kubenswrapper[4907]: I0127 18:12:27.689212 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" event={"ID":"1f06e513-6675-48e3-a197-46a4df6eb319","Type":"ContainerStarted","Data":"b75c64b04339409bf659d0d6090c83d07ef71d5c949ecc4c3f5e577280edd415"} Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.240337 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fqkck"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.241907 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.259728 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fqkck"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.346669 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.347404 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.349239 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-tmmcx" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.349255 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.362040 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422590 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-certificates\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422766 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzvsb\" (UniqueName: 
\"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-kube-api-access-qzvsb\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-tls\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422884 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422910 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-bound-sa-token\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422948 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-trusted-ca\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc 
kubenswrapper[4907]: I0127 18:12:29.422997 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21da9305-e6ab-4378-b316-7a3ffc47faa0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.423080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21da9305-e6ab-4378-b316-7a3ffc47faa0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.446054 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524494 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21da9305-e6ab-4378-b316-7a3ffc47faa0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524590 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/dccc085e-3aae-4c8e-8737-699c60063730-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2fplf\" (UID: \"dccc085e-3aae-4c8e-8737-699c60063730\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524633 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-certificates\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzvsb\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-kube-api-access-qzvsb\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524695 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-tls\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-bound-sa-token\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524759 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-trusted-ca\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524781 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21da9305-e6ab-4378-b316-7a3ffc47faa0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.525161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21da9305-e6ab-4378-b316-7a3ffc47faa0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.526356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-certificates\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.526356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-trusted-ca\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 
18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.531417 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21da9305-e6ab-4378-b316-7a3ffc47faa0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.531988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-tls\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.543551 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzvsb\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-kube-api-access-qzvsb\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.548984 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-bound-sa-token\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.555909 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.626169 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/dccc085e-3aae-4c8e-8737-699c60063730-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2fplf\" (UID: \"dccc085e-3aae-4c8e-8737-699c60063730\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.629849 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/dccc085e-3aae-4c8e-8737-699c60063730-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2fplf\" (UID: \"dccc085e-3aae-4c8e-8737-699c60063730\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.661313 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.703286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" event={"ID":"1f06e513-6675-48e3-a197-46a4df6eb319","Type":"ContainerStarted","Data":"eae8c4368e113dc5cfc98ba0fdba645e199a4c98b9caa3fda35c3328edd06594"} Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.725764 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" podStartSLOduration=1.863310056 podStartE2EDuration="3.725740501s" podCreationTimestamp="2026-01-27 18:12:26 +0000 UTC" firstStartedPulling="2026-01-27 18:12:26.835891505 +0000 UTC m=+401.965174127" lastFinishedPulling="2026-01-27 18:12:28.69832196 +0000 UTC m=+403.827604572" observedRunningTime="2026-01-27 18:12:29.720451816 +0000 UTC m=+404.849734438" watchObservedRunningTime="2026-01-27 18:12:29.725740501 +0000 UTC m=+404.855023113" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.984333 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fqkck"] Jan 27 18:12:29 crc kubenswrapper[4907]: W0127 18:12:29.992994 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21da9305_e6ab_4378_b316_7a3ffc47faa0.slice/crio-c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520 WatchSource:0}: Error finding container c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520: Status 404 returned error can't find the container with id c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520 Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.145490 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf"] Jan 27 18:12:30 crc kubenswrapper[4907]: W0127 18:12:30.150758 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddccc085e_3aae_4c8e_8737_699c60063730.slice/crio-7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171 WatchSource:0}: Error finding container 7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171: Status 404 returned error can't find the container with id 7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171 Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.716681 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" event={"ID":"21da9305-e6ab-4378-b316-7a3ffc47faa0","Type":"ContainerStarted","Data":"934cf5e1d8502686f370ff0045b6810381dc76e84b21ec8bf75fd4935698431a"} Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.716779 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" event={"ID":"21da9305-e6ab-4378-b316-7a3ffc47faa0","Type":"ContainerStarted","Data":"c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520"} Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.719040 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.719235 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" event={"ID":"dccc085e-3aae-4c8e-8737-699c60063730","Type":"ContainerStarted","Data":"7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171"} Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.753464 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" podStartSLOduration=1.7534291419999999 podStartE2EDuration="1.753429142s" podCreationTimestamp="2026-01-27 18:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:12:30.749466355 +0000 UTC m=+405.878749037" watchObservedRunningTime="2026-01-27 18:12:30.753429142 +0000 UTC m=+405.882711794" Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.731777 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" event={"ID":"dccc085e-3aae-4c8e-8737-699c60063730","Type":"ContainerStarted","Data":"2297d43ccd2b3db8afba28eb41fd1f0131b6efd680916504deeef3e0cb335554"} Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.732258 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.745656 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.750866 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podStartSLOduration=2.265469189 podStartE2EDuration="3.750803902s" podCreationTimestamp="2026-01-27 18:12:29 +0000 UTC" firstStartedPulling="2026-01-27 18:12:30.154840998 +0000 UTC m=+405.284123610" lastFinishedPulling="2026-01-27 18:12:31.640175711 +0000 UTC m=+406.769458323" observedRunningTime="2026-01-27 18:12:32.750292437 +0000 UTC m=+407.879575089" watchObservedRunningTime="2026-01-27 18:12:32.750803902 +0000 UTC m=+407.880086544" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.456592 4907 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-256m4"] Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.457960 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.461199 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.461978 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.462158 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.462397 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-4v6tr" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.468945 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-256m4"] Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.611938 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ccd008a1-f6c9-49ea-8b56-893754445191-metrics-client-ca\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.612014 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-256m4\" (UID: 
\"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.612216 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zszkb\" (UniqueName: \"kubernetes.io/projected/ccd008a1-f6c9-49ea-8b56-893754445191-kube-api-access-zszkb\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.612298 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713648 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zszkb\" (UniqueName: \"kubernetes.io/projected/ccd008a1-f6c9-49ea-8b56-893754445191-kube-api-access-zszkb\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713695 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713756 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ccd008a1-f6c9-49ea-8b56-893754445191-metrics-client-ca\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.714995 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ccd008a1-f6c9-49ea-8b56-893754445191-metrics-client-ca\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.721296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.721830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-256m4\" (UID: 
\"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.739297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zszkb\" (UniqueName: \"kubernetes.io/projected/ccd008a1-f6c9-49ea-8b56-893754445191-kube-api-access-zszkb\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.786741 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:34 crc kubenswrapper[4907]: I0127 18:12:34.193597 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-256m4"] Jan 27 18:12:34 crc kubenswrapper[4907]: W0127 18:12:34.205138 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccd008a1_f6c9_49ea_8b56_893754445191.slice/crio-c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea WatchSource:0}: Error finding container c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea: Status 404 returned error can't find the container with id c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea Jan 27 18:12:34 crc kubenswrapper[4907]: I0127 18:12:34.746657 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" event={"ID":"ccd008a1-f6c9-49ea-8b56-893754445191","Type":"ContainerStarted","Data":"c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea"} Jan 27 18:12:36 crc kubenswrapper[4907]: I0127 18:12:36.777388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" 
event={"ID":"ccd008a1-f6c9-49ea-8b56-893754445191","Type":"ContainerStarted","Data":"08a72d1dd6e8154c2406e8b35441938326365f0a0ffa8e3d46afaa2499c200fa"} Jan 27 18:12:36 crc kubenswrapper[4907]: I0127 18:12:36.778429 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" event={"ID":"ccd008a1-f6c9-49ea-8b56-893754445191","Type":"ContainerStarted","Data":"81fc3fc75dbf697385c9108d4812174a5704a4363b5a2fc6add7df7a4a141e0a"} Jan 27 18:12:36 crc kubenswrapper[4907]: I0127 18:12:36.804165 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" podStartSLOduration=2.31659018 podStartE2EDuration="3.804136728s" podCreationTimestamp="2026-01-27 18:12:33 +0000 UTC" firstStartedPulling="2026-01-27 18:12:34.207386432 +0000 UTC m=+409.336669054" lastFinishedPulling="2026-01-27 18:12:35.69493299 +0000 UTC m=+410.824215602" observedRunningTime="2026-01-27 18:12:36.800396689 +0000 UTC m=+411.929679341" watchObservedRunningTime="2026-01-27 18:12:36.804136728 +0000 UTC m=+411.933419380" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.791980 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.794157 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.840420 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.840593 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.841399 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-q6vpk" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.845115 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.880713 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.892083 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.893669 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894376 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k98s9\" (UniqueName: \"kubernetes.io/projected/3227578e-bf46-482d-bc81-33cf9f5e45e9-kube-api-access-k98s9\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894547 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894749 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894892 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894956 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.895108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.895161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3227578e-bf46-482d-bc81-33cf9f5e45e9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.895786 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-2pnkk" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.904945 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-nln57"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.908748 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.913691 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9qfxp" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.914353 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.914418 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.996985 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b64919b-b6e7-4cc9-a40a-a22ac0022126-metrics-client-ca\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997174 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpc7x\" (UniqueName: \"kubernetes.io/projected/109f2f0b-779e-4070-842b-eb81187fb12a-kube-api-access-mpc7x\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997222 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wg4b\" (UniqueName: \"kubernetes.io/projected/2b64919b-b6e7-4cc9-a40a-a22ac0022126-kube-api-access-8wg4b\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997256 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k98s9\" (UniqueName: \"kubernetes.io/projected/3227578e-bf46-482d-bc81-33cf9f5e45e9-kube-api-access-k98s9\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-root\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/109f2f0b-779e-4070-842b-eb81187fb12a-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997481 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-sys\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997512 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997694 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-wtmp\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997762 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-textfile\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997833 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997971 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3227578e-bf46-482d-bc81-33cf9f5e45e9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998037 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998065 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998096 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nln57\" (UID: 
\"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998214 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3227578e-bf46-482d-bc81-33cf9f5e45e9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.003958 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.013319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.028395 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-k98s9\" (UniqueName: \"kubernetes.io/projected/3227578e-bf46-482d-bc81-33cf9f5e45e9-kube-api-access-k98s9\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpc7x\" (UniqueName: \"kubernetes.io/projected/109f2f0b-779e-4070-842b-eb81187fb12a-kube-api-access-mpc7x\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099777 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wg4b\" (UniqueName: \"kubernetes.io/projected/2b64919b-b6e7-4cc9-a40a-a22ac0022126-kube-api-access-8wg4b\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099807 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-root\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099832 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/109f2f0b-779e-4070-842b-eb81187fb12a-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099858 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-sys\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099908 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099933 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-wtmp\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099956 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-textfile\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " 
pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099982 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100003 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100022 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100059 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100082 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b64919b-b6e7-4cc9-a40a-a22ac0022126-metrics-client-ca\") pod \"node-exporter-nln57\" (UID: 
\"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-root\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-sys\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101391 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-wtmp\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: E0127 18:12:39.101393 4907 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Jan 27 18:12:39 crc kubenswrapper[4907]: E0127 18:12:39.101500 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls podName:2b64919b-b6e7-4cc9-a40a-a22ac0022126 nodeName:}" failed. No retries permitted until 2026-01-27 18:12:39.601468351 +0000 UTC m=+414.730751023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls") pod "node-exporter-nln57" (UID: "2b64919b-b6e7-4cc9-a40a-a22ac0022126") : secret "node-exporter-tls" not found Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101773 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/109f2f0b-779e-4070-842b-eb81187fb12a-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.102004 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-textfile\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.102208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.102394 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b64919b-b6e7-4cc9-a40a-a22ac0022126-metrics-client-ca\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 
18:12:39.103128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.104355 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.104900 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.112108 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.120891 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wg4b\" (UniqueName: \"kubernetes.io/projected/2b64919b-b6e7-4cc9-a40a-a22ac0022126-kube-api-access-8wg4b\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " 
pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.128260 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpc7x\" (UniqueName: \"kubernetes.io/projected/109f2f0b-779e-4070-842b-eb81187fb12a-kube-api-access-mpc7x\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.156685 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.219600 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.569620 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7"] Jan 27 18:12:39 crc kubenswrapper[4907]: W0127 18:12:39.575267 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3227578e_bf46_482d_bc81_33cf9f5e45e9.slice/crio-eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a WatchSource:0}: Error finding container eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a: Status 404 returned error can't find the container with id eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.605651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 
18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.610375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.663327 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr"] Jan 27 18:12:39 crc kubenswrapper[4907]: W0127 18:12:39.667981 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod109f2f0b_779e_4070_842b_eb81187fb12a.slice/crio-94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa WatchSource:0}: Error finding container 94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa: Status 404 returned error can't find the container with id 94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.803943 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa"} Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.805689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"016d393dc785c09f48b7b5f0119a98cb6f2a5b8ce44ba848cab9804651fee169"} Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.805713 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" 
event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a"} Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.829747 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: W0127 18:12:39.862670 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b64919b_b6e7_4cc9_a40a_a22ac0022126.slice/crio-9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e WatchSource:0}: Error finding container 9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e: Status 404 returned error can't find the container with id 9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.925215 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.927397 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.929957 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.930480 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.931266 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.931445 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.931828 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.932432 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-dxz4f" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.932592 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.932739 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.939062 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.958199 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014140 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014185 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-volume\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014225 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014244 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014268 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: 
\"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014286 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014303 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-web-config\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014323 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-out\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014347 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014365 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014382 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mqj\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-kube-api-access-c6mqj\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014404 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115465 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115536 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-web-config\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115576 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-out\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115637 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115670 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mqj\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-kube-api-access-c6mqj\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115707 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115749 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115768 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-volume\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115822 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.116082 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" 
(UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.116879 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.117737 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.121339 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-out\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.121958 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.122311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.125738 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-volume\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.130161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.137065 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.137243 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-web-config\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.150366 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.156126 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mqj\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-kube-api-access-c6mqj\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.252176 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.704635 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 18:12:40 crc kubenswrapper[4907]: W0127 18:12:40.715879 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3797d1fb_c1cc_4ee9_8a90_7f26906ce9b2.slice/crio-8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793 WatchSource:0}: Error finding container 8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793: Status 404 returned error can't find the container with id 8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793 Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.815884 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"9473c7e6ca38a2d926fa16dcc9b660e365e87cfa1fbd8d7b413352d1f5b8f080"} Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.817543 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerStarted","Data":"9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e"} Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.818806 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793"} Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.899819 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9"] Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.902162 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907314 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4r7ll" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907551 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907825 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bv9306prcgika" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907960 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.908102 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.908277 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.908405 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.924318 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9"] Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.048977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-metrics-client-ca\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nr84\" (UniqueName: \"kubernetes.io/projected/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-kube-api-access-4nr84\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049513 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-grpc-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049602 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049647 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049690 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: 
\"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150451 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-metrics-client-ca\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150505 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nr84\" (UniqueName: \"kubernetes.io/projected/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-kube-api-access-4nr84\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150524 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150576 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-grpc-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150597 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150662 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.152707 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-metrics-client-ca\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 
crc kubenswrapper[4907]: I0127 18:12:42.170197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.170209 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.170667 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.171964 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-grpc-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.173093 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-tls\") 
pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.173312 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.173704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nr84\" (UniqueName: \"kubernetes.io/projected/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-kube-api-access-4nr84\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.311929 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.718134 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9"] Jan 27 18:12:42 crc kubenswrapper[4907]: W0127 18:12:42.777164 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0f501d_4ce7_4268_b84c_71e7a8a1b430.slice/crio-6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b WatchSource:0}: Error finding container 6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b: Status 404 returned error can't find the container with id 6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.834494 4907 generic.go:334] "Generic (PLEG): container finished" podID="2b64919b-b6e7-4cc9-a40a-a22ac0022126" containerID="c8aa0ff9df88cc288ddd843ef0af1dfb14ce08b2f8404a7f5ef883b301c43d13" exitCode=0 Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.834581 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerDied","Data":"c8aa0ff9df88cc288ddd843ef0af1dfb14ce08b2f8404a7f5ef883b301c43d13"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.836714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.839908 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" 
event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"0e23e36f48a50c2f1d580245a787292935d13f4bb4d433040a6221a645e2a7e0"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.839975 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"55997bbef9b29ca17253903a775af415c0e3c86ade9a43d98530733be6b1f4d4"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.839989 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"883f52da2c95e11b3b1cab303073666f928e30575ea0fb2adbb5306f48bb64f3"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.841966 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"95c105599a07d32955429425e0d2619c332b14f23ed4aaa8a44830a4ae31b4be"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.879666 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" podStartSLOduration=2.733910545 podStartE2EDuration="4.879632803s" podCreationTimestamp="2026-01-27 18:12:38 +0000 UTC" firstStartedPulling="2026-01-27 18:12:39.671006283 +0000 UTC m=+414.800288895" lastFinishedPulling="2026-01-27 18:12:41.816728541 +0000 UTC m=+416.946011153" observedRunningTime="2026-01-27 18:12:42.874582575 +0000 UTC m=+418.003865207" watchObservedRunningTime="2026-01-27 18:12:42.879632803 +0000 UTC m=+418.008915415" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.896805 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" 
podStartSLOduration=2.95106769 podStartE2EDuration="4.896783606s" podCreationTimestamp="2026-01-27 18:12:38 +0000 UTC" firstStartedPulling="2026-01-27 18:12:39.883603574 +0000 UTC m=+415.012886186" lastFinishedPulling="2026-01-27 18:12:41.82931949 +0000 UTC m=+416.958602102" observedRunningTime="2026-01-27 18:12:42.889660597 +0000 UTC m=+418.018943209" watchObservedRunningTime="2026-01-27 18:12:42.896783606 +0000 UTC m=+418.026066218" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.619422 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.620816 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.633173 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.675898 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.675956 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.675976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5wq\" (UniqueName: 
\"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676006 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676227 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676279 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676376 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.777907 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.777974 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778003 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778039 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778109 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779032 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779346 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779721 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779878 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.791212 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.791268 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.801976 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.852096 4907 generic.go:334] "Generic (PLEG): container finished" podID="3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2" containerID="097827d617ad4ae0c94e4ccbd283e7db0f702edbe88460a91fcdaf07777118bb" exitCode=0 Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.852181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerDied","Data":"097827d617ad4ae0c94e4ccbd283e7db0f702edbe88460a91fcdaf07777118bb"} Jan 27 18:12:43 crc kubenswrapper[4907]: 
I0127 18:12:43.863106 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerStarted","Data":"b3301bc3c3088b9f7bd4f5c54730f86c7ee33e1f528931fddeedfafb364a3976"} Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.863154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerStarted","Data":"695bf4485cfe4386e544433043e145ed5c5f9b6e9e45819065bfbd668646e2c6"} Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.940752 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.026209 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-nln57" podStartSLOduration=4.0647906 podStartE2EDuration="6.026182296s" podCreationTimestamp="2026-01-27 18:12:38 +0000 UTC" firstStartedPulling="2026-01-27 18:12:39.864775172 +0000 UTC m=+414.994057784" lastFinishedPulling="2026-01-27 18:12:41.826166838 +0000 UTC m=+416.955449480" observedRunningTime="2026-01-27 18:12:43.908156497 +0000 UTC m=+419.037439109" watchObservedRunningTime="2026-01-27 18:12:44.026182296 +0000 UTC m=+419.155464908" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.027898 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7f448b7857-l4vhw"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.028838 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.030840 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.031044 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-wdtxr" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.031256 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.032495 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-bo2gg3s7etg0k" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.032851 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.032931 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.037264 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7f448b7857-l4vhw"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.089632 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-audit-log\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090111 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4drc\" (UniqueName: \"kubernetes.io/projected/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-kube-api-access-j4drc\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-client-certs\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090270 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-server-tls\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090298 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-metrics-server-audit-profiles\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " 
pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-client-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191446 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-client-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-audit-log\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191569 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4drc\" (UniqueName: 
\"kubernetes.io/projected/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-kube-api-access-j4drc\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191624 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-client-certs\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191656 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-server-tls\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191677 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-metrics-server-audit-profiles\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.192871 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc 
kubenswrapper[4907]: I0127 18:12:44.193204 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-audit-log\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.194396 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-metrics-server-audit-profiles\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.197665 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-client-certs\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.198340 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-server-tls\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.201724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-client-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " 
pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.206984 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4drc\" (UniqueName: \"kubernetes.io/projected/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-kube-api-access-j4drc\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.356480 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.363929 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:12:44 crc kubenswrapper[4907]: W0127 18:12:44.376133 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbb41873_fa83_4786_b31d_d0d3ebeb902b.slice/crio-b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954 WatchSource:0}: Error finding container b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954: Status 404 returned error can't find the container with id b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954 Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.607044 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6596df577b-flw67"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.608204 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.615319 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.615568 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.634720 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6596df577b-flw67"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.724388 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3e1c70a-dd32-4bc6-b7ec-6ec039441440-monitoring-plugin-cert\") pod \"monitoring-plugin-6596df577b-flw67\" (UID: \"c3e1c70a-dd32-4bc6-b7ec-6ec039441440\") " pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.826342 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3e1c70a-dd32-4bc6-b7ec-6ec039441440-monitoring-plugin-cert\") pod \"monitoring-plugin-6596df577b-flw67\" (UID: \"c3e1c70a-dd32-4bc6-b7ec-6ec039441440\") " pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.845966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3e1c70a-dd32-4bc6-b7ec-6ec039441440-monitoring-plugin-cert\") pod \"monitoring-plugin-6596df577b-flw67\" (UID: \"c3e1c70a-dd32-4bc6-b7ec-6ec039441440\") " pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.873848 4907 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7f448b7857-l4vhw"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.874324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerStarted","Data":"b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954"} Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.935447 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.109609 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.111513 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.115886 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.115953 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.115892 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-fj9lk" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116045 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116149 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116601 4907 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116827 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.118847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.119097 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-38cbjc522925s" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.119231 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.119392 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.122913 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.125881 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.140418 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236292 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: 
I0127 18:12:45.236375 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236407 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236452 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-web-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236793 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc 
kubenswrapper[4907]: I0127 18:12:45.236840 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-config-out\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236858 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236910 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236978 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237002 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237031 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237092 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237150 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhtl\" (UniqueName: 
\"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-kube-api-access-nkhtl\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237182 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.338940 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhtl\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-kube-api-access-nkhtl\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.338997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339013 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339734 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339829 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-web-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339924 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-config-out\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339951 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339962 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340021 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340064 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340095 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340122 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340145 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" 
(UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340300 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340941 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.341000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.342829 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.343620 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.343759 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.343875 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.344653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-web-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.344814 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.345837 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.346190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-config-out\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.348319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.348775 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.351598 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.352979 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.354219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.363888 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhtl\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-kube-api-access-nkhtl\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.431935 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: W0127 18:12:45.581176 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod562a795f_c556_42b2_a9a3_0baf8b3ce4c5.slice/crio-bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7 WatchSource:0}: Error finding container bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7: Status 404 returned error can't find the container with id bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7 Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.879550 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerStarted","Data":"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"} Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.881836 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerStarted","Data":"bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7"} Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.911356 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cc8bd7b4-59b72" podStartSLOduration=2.911338498 podStartE2EDuration="2.911338498s" podCreationTimestamp="2026-01-27 18:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:12:45.902910811 +0000 UTC m=+421.032193453" watchObservedRunningTime="2026-01-27 18:12:45.911338498 +0000 UTC m=+421.040621110" Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.631726 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/monitoring-plugin-6596df577b-flw67"] Jan 27 18:12:47 crc kubenswrapper[4907]: W0127 18:12:46.642388 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e1c70a_dd32_4bc6_b7ec_6ec039441440.slice/crio-e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616 WatchSource:0}: Error finding container e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616: Status 404 returned error can't find the container with id e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616 Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.700599 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 18:12:47 crc kubenswrapper[4907]: W0127 18:12:46.708634 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadac6b31_6901_4af8_bc21_648d56318021.slice/crio-63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a WatchSource:0}: Error finding container 63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a: Status 404 returned error can't find the container with id 63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.888959 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.891228 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"86d8aca05cc94e034fdff91a725c05e14cdeb6fa45a159e588121c06df631aa4"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 
18:12:46.891279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"e57238a4dc0b2bf79560cd3ffc972cbfa338354c5cec19a0c81fcbb5500021a2"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.891292 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"5c81680fb82ab7c135ed6a153161c9b9474d5720de0fdea0a5722a44d0eb1c1d"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.893685 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"e3ad205c23867f185c2e09b4e8c6533cfc4d08ccdd7ef6d843d4705904d1f7bc"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.893716 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"b37e225d07feea62e890d4bbb3defe3271e136435ecbd7291aa1b68d717cfce2"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.893727 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"580714ab4bc2ccd8a2e2c1ee20846bece04c1324e80977af2b1e74a967743427"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.895839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" event={"ID":"c3e1c70a-dd32-4bc6-b7ec-6ec039441440","Type":"ContainerStarted","Data":"e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.903938 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerStarted","Data":"c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.908883 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"6c6648fc73dd44026854491676f0287794c631aaa5473d5f10b9dc2d38387ee5"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.908921 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"bd555f91fe21e2df39af47867452d32cffd382d952c9ba6793be28d5c0880d7d"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.910322 4907 generic.go:334] "Generic (PLEG): container finished" podID="adac6b31-6901-4af8-bc21-648d56318021" containerID="02ccd677bcd49803979431c60b8f1e6b7bf742c20502fcaf097fbad7c4954043" exitCode=0 Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.910354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerDied","Data":"02ccd677bcd49803979431c60b8f1e6b7bf742c20502fcaf097fbad7c4954043"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.929366 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podStartSLOduration=2.000238054 podStartE2EDuration="3.929348133s" podCreationTimestamp="2026-01-27 18:12:44 +0000 UTC" firstStartedPulling="2026-01-27 18:12:45.584471348 +0000 UTC m=+420.713753960" lastFinishedPulling="2026-01-27 18:12:47.513581427 +0000 UTC m=+422.642864039" observedRunningTime="2026-01-27 18:12:47.925521931 +0000 UTC m=+423.054804533" 
watchObservedRunningTime="2026-01-27 18:12:47.929348133 +0000 UTC m=+423.058630745" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.923952 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"5c8da3d7b99c2462d65efd5ce25dba6c9a9704d349cc448416d200e5c82f8f70"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.924022 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"d78684e2f24815421491d90d7518e0c87348b62d375d0c1b71f809f76bced033"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.924042 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"19f9c8ab6f2eacc00cf5c489539bb71163bd6f8fcc4e369835d6f113d2e813fd"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.925526 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.927422 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" event={"ID":"c3e1c70a-dd32-4bc6-b7ec-6ec039441440","Type":"ContainerStarted","Data":"aac7a9fe1993ca66ad15cfca52536cf84b72c871cd832f1e6ff443b5ba4b645e"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.927917 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.932899 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"f8f651c635638588e16f63d265adea2c5991e04bb1d17d182ece1adbfd43a08a"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.936854 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.977415 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podStartSLOduration=2.69903117 podStartE2EDuration="7.97738414s" podCreationTimestamp="2026-01-27 18:12:41 +0000 UTC" firstStartedPulling="2026-01-27 18:12:42.779087137 +0000 UTC m=+417.908369749" lastFinishedPulling="2026-01-27 18:12:48.057440097 +0000 UTC m=+423.186722719" observedRunningTime="2026-01-27 18:12:48.951234683 +0000 UTC m=+424.080517285" watchObservedRunningTime="2026-01-27 18:12:48.97738414 +0000 UTC m=+424.106666762" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.978361 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podStartSLOduration=3.081030531 podStartE2EDuration="4.978355388s" podCreationTimestamp="2026-01-27 18:12:44 +0000 UTC" firstStartedPulling="2026-01-27 18:12:46.653386217 +0000 UTC m=+421.782668829" lastFinishedPulling="2026-01-27 18:12:48.550711074 +0000 UTC m=+423.679993686" observedRunningTime="2026-01-27 18:12:48.970391825 +0000 UTC m=+424.099674437" watchObservedRunningTime="2026-01-27 18:12:48.978355388 +0000 UTC m=+424.107638010" Jan 27 18:12:49 crc kubenswrapper[4907]: I0127 18:12:49.019846 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.68465797 podStartE2EDuration="10.019819443s" podCreationTimestamp="2026-01-27 18:12:39 +0000 UTC" firstStartedPulling="2026-01-27 18:12:40.719441511 +0000 UTC 
m=+415.848724123" lastFinishedPulling="2026-01-27 18:12:48.054602994 +0000 UTC m=+423.183885596" observedRunningTime="2026-01-27 18:12:49.002101814 +0000 UTC m=+424.131384436" watchObservedRunningTime="2026-01-27 18:12:49.019819443 +0000 UTC m=+424.149102055" Jan 27 18:12:49 crc kubenswrapper[4907]: I0127 18:12:49.564164 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:49 crc kubenswrapper[4907]: I0127 18:12:49.620009 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:12:51 crc kubenswrapper[4907]: I0127 18:12:51.957951 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"bab7ad455d7bcf8b8025daa37f484260be75fe10219e39d69fc8ef2d0dbd2fce"} Jan 27 18:12:51 crc kubenswrapper[4907]: I0127 18:12:51.958721 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"35244a64ccb45d6463063a0944dd1016d6de399355b4f19dea17b96a6ad3cce6"} Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.506310 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.969535 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"4c07ce8aa6ee6b5b4fdaf35a5bdd25f70ebdb0d3860364128e7441387a136da3"} Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.969606 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"a697b53b00cc7778188e6ab6a600c50f8590251dc7c18cca7a4e3664161240d3"} Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.969677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"a7fdd7a2b3898b5812ae7617f00929a7e55cf645a4b616af41291eb13219c945"} Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.942239 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.942291 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.947365 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.981469 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"6551ae61669b66c95152146db91aefaba49601583377f5e60a60e80a5da520e3"} Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.984852 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:54 crc kubenswrapper[4907]: I0127 18:12:54.019383 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.371568521 podStartE2EDuration="9.019211939s" podCreationTimestamp="2026-01-27 18:12:45 +0000 UTC" firstStartedPulling="2026-01-27 18:12:47.913357524 +0000 UTC m=+423.042640136" lastFinishedPulling="2026-01-27 18:12:51.561000932 +0000 UTC m=+426.690283554" 
observedRunningTime="2026-01-27 18:12:54.014850571 +0000 UTC m=+429.144133203" watchObservedRunningTime="2026-01-27 18:12:54.019211939 +0000 UTC m=+429.148494561" Jan 27 18:12:54 crc kubenswrapper[4907]: I0127 18:12:54.078129 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:12:55 crc kubenswrapper[4907]: I0127 18:12:55.432881 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.522158 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.522279 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.522369 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.523458 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.523643 4907 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39" gracePeriod=600 Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007149 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39" exitCode=0 Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39"} Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007488 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded"} Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007516 4907 scope.go:117] "RemoveContainer" containerID="f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e" Jan 27 18:13:04 crc kubenswrapper[4907]: I0127 18:13:04.358051 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:13:04 crc kubenswrapper[4907]: I0127 18:13:04.358840 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:13:14 crc kubenswrapper[4907]: I0127 18:13:14.665313 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" 
podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry" containerID="cri-o://7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4" gracePeriod=30 Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.172743 4907 generic.go:334] "Generic (PLEG): container finished" podID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerID="7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4" exitCode=0 Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.172907 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerDied","Data":"7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4"} Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.173216 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerDied","Data":"c5aa828cd072604ed1f906a58b65bc98f6dfd27675da5071e1386f563dc177a1"} Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.173242 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5aa828cd072604ed1f906a58b65bc98f6dfd27675da5071e1386f563dc177a1" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.193447 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213733 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213815 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213849 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213890 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213926 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213955 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.214001 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.214029 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.216416 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.229213 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.229627 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.231543 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88" (OuterVolumeSpecName: "kube-api-access-b9t88") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "kube-api-access-b9t88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.245659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.246075 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.250975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.264211 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.316761 4907 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317116 4907 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317129 4907 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317141 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317153 4907 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317165 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317177 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:16 crc kubenswrapper[4907]: I0127 18:13:16.180062 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:13:16 crc kubenswrapper[4907]: I0127 18:13:16.207097 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:13:16 crc kubenswrapper[4907]: I0127 18:13:16.215538 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:13:17 crc kubenswrapper[4907]: I0127 18:13:17.763473 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" path="/var/lib/kubelet/pods/c85caecd-2eec-479e-82a3-2ac3c53c79c6/volumes" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.153071 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-grwdr" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console" containerID="cri-o://a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43" gracePeriod=15 Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.583410 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-grwdr_c40070fe-7a8d-4f73-ad68-7e0a36680906/console/0.log" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.583728 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688405 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688540 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688587 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688663 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688688 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688713 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688761 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.689743 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config" (OuterVolumeSpecName: "console-config") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.689759 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.690255 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.690372 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca" (OuterVolumeSpecName: "service-ca") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.695120 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.695360 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.695500 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd" (OuterVolumeSpecName: "kube-api-access-m25qd") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "kube-api-access-m25qd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790418 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790907 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790918 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790927 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790935 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790945 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790955 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:13:20 crc 
kubenswrapper[4907]: I0127 18:13:20.213130 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-grwdr_c40070fe-7a8d-4f73-ad68-7e0a36680906/console/0.log" Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213198 4907 generic.go:334] "Generic (PLEG): container finished" podID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43" exitCode=2 Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213239 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerDied","Data":"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"} Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213276 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerDied","Data":"4f4e687b11dd2ca7eb21e3c540aa81cbaa9c488161aa4b888533995942e8fa1a"} Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213297 4907 scope.go:117] "RemoveContainer" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43" Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213344 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.241239 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.246932 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.250083 4907 scope.go:117] "RemoveContainer" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43" Jan 27 18:13:20 crc kubenswrapper[4907]: E0127 18:13:20.250640 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43\": container with ID starting with a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43 not found: ID does not exist" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43" Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.250686 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"} err="failed to get container status \"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43\": rpc error: code = NotFound desc = could not find container \"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43\": container with ID starting with a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43 not found: ID does not exist" Jan 27 18:13:21 crc kubenswrapper[4907]: I0127 18:13:21.760492 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" path="/var/lib/kubelet/pods/c40070fe-7a8d-4f73-ad68-7e0a36680906/volumes" Jan 27 18:13:24 crc kubenswrapper[4907]: I0127 18:13:24.364796 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:13:24 crc kubenswrapper[4907]: I0127 18:13:24.370216 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:13:45 crc kubenswrapper[4907]: I0127 18:13:45.432620 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:13:45 crc kubenswrapper[4907]: I0127 18:13:45.483302 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:13:46 crc kubenswrapper[4907]: I0127 18:13:46.481235 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.670104 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"] Jan 27 18:14:28 crc kubenswrapper[4907]: E0127 18:14:28.671306 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.671323 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry" Jan 27 18:14:28 crc kubenswrapper[4907]: E0127 18:14:28.671346 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.671355 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.671493 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console" Jan 27 18:14:28 crc kubenswrapper[4907]: 
I0127 18:14:28.671513 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.672086 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.688037 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"] Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712408 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712481 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712506 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8s9p\" (UniqueName: 
\"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712530 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712809 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712982 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.813879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.814792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.814956 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815047 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815153 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.816089 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.816219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.816483 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.820830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.824504 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.832319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.991087 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:29 crc kubenswrapper[4907]: I0127 18:14:29.297197 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"]
Jan 27 18:14:29 crc kubenswrapper[4907]: I0127 18:14:29.971799 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerStarted","Data":"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"}
Jan 27 18:14:29 crc kubenswrapper[4907]: I0127 18:14:29.971860 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerStarted","Data":"d22d0be7c5012debcbe1ac6b1b934a7244865eb06d8f858be9fb3384ddfdb6a5"}
Jan 27 18:14:30 crc kubenswrapper[4907]: I0127 18:14:30.003545 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b447cd8-v5z5k" podStartSLOduration=2.003512742 podStartE2EDuration="2.003512742s" podCreationTimestamp="2026-01-27 18:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:14:29.999792593 +0000 UTC m=+525.129075235" watchObservedRunningTime="2026-01-27 18:14:30.003512742 +0000 UTC m=+525.132795384"
Jan 27 18:14:38 crc kubenswrapper[4907]: I0127 18:14:38.992246 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:38 crc kubenswrapper[4907]: I0127 18:14:38.993728 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:39 crc kubenswrapper[4907]: I0127 18:14:39.000224 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:39 crc kubenswrapper[4907]: I0127 18:14:39.057585 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:39 crc kubenswrapper[4907]: I0127 18:14:39.149197 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"]
Jan 27 18:14:46 crc kubenswrapper[4907]: I0127 18:14:46.312882 4907 scope.go:117] "RemoveContainer" containerID="dd24dd32da263b7052a82f6c2b680b2979832173d139168c7d6b2bbf5b442718"
Jan 27 18:14:46 crc kubenswrapper[4907]: I0127 18:14:46.350515 4907 scope.go:117] "RemoveContainer" containerID="7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4"
Jan 27 18:14:56 crc kubenswrapper[4907]: I0127 18:14:56.521272 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:14:56 crc kubenswrapper[4907]: I0127 18:14:56.522168 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.208260 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"]
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.209667 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.216180 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.216350 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.235008 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"]
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.357954 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.358044 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.358210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.459433 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.459538 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.459617 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.461276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.467755 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.477646 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.535645 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.784886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"]
Jan 27 18:15:01 crc kubenswrapper[4907]: I0127 18:15:01.226664 4907 generic.go:334] "Generic (PLEG): container finished" podID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerID="18df1497634165c04863e96f7f6daec0a2367654ea826c8f22afef5c3b441191" exitCode=0
Jan 27 18:15:01 crc kubenswrapper[4907]: I0127 18:15:01.226741 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" event={"ID":"98eb00a2-9da3-459d-b011-7d92bcd6ed21","Type":"ContainerDied","Data":"18df1497634165c04863e96f7f6daec0a2367654ea826c8f22afef5c3b441191"}
Jan 27 18:15:01 crc kubenswrapper[4907]: I0127 18:15:01.227084 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" event={"ID":"98eb00a2-9da3-459d-b011-7d92bcd6ed21","Type":"ContainerStarted","Data":"b88f6767f4fd68e044541259acb6bf93287ffd86e079fdec4fc25cc2cfd19dd6"}
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.522646 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.689003 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") "
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.689094 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") "
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.689204 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") "
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.690586 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume" (OuterVolumeSpecName: "config-volume") pod "98eb00a2-9da3-459d-b011-7d92bcd6ed21" (UID: "98eb00a2-9da3-459d-b011-7d92bcd6ed21"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.695165 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d" (OuterVolumeSpecName: "kube-api-access-rfb7d") pod "98eb00a2-9da3-459d-b011-7d92bcd6ed21" (UID: "98eb00a2-9da3-459d-b011-7d92bcd6ed21"). InnerVolumeSpecName "kube-api-access-rfb7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.695473 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98eb00a2-9da3-459d-b011-7d92bcd6ed21" (UID: "98eb00a2-9da3-459d-b011-7d92bcd6ed21"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.790965 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.791469 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.791670 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:03 crc kubenswrapper[4907]: I0127 18:15:03.243372 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" event={"ID":"98eb00a2-9da3-459d-b011-7d92bcd6ed21","Type":"ContainerDied","Data":"b88f6767f4fd68e044541259acb6bf93287ffd86e079fdec4fc25cc2cfd19dd6"}
Jan 27 18:15:03 crc kubenswrapper[4907]: I0127 18:15:03.243414 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b88f6767f4fd68e044541259acb6bf93287ffd86e079fdec4fc25cc2cfd19dd6"
Jan 27 18:15:03 crc kubenswrapper[4907]: I0127 18:15:03.243908 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.216747 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7cc8bd7b4-59b72" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console" containerID="cri-o://9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2" gracePeriod=15
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.586509 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cc8bd7b4-59b72_bbb41873-fa83-4786-b31d-d0d3ebeb902b/console/0.log"
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.586602 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72"
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728010 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") "
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728071 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") "
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728101 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") "
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728892 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728904 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config" (OuterVolumeSpecName: "console-config") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729061 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") "
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729094 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") "
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729145 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") "
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729186 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729228 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") "
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729507 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729524 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729536 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.730062 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca" (OuterVolumeSpecName: "service-ca") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.733219 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq" (OuterVolumeSpecName: "kube-api-access-hz5wq") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "kube-api-access-hz5wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.733309 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.734065 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.830689 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.831028 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.831096 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.831167 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262336 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cc8bd7b4-59b72_bbb41873-fa83-4786-b31d-d0d3ebeb902b/console/0.log"
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262425 4907 generic.go:334] "Generic (PLEG): container finished" podID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2" exitCode=2
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262471 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerDied","Data":"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"}
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262513 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerDied","Data":"b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954"}
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262541 4907 scope.go:117] "RemoveContainer" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262748 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72"
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.297024 4907 scope.go:117] "RemoveContainer" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"
Jan 27 18:15:05 crc kubenswrapper[4907]: E0127 18:15:05.298022 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2\": container with ID starting with 9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2 not found: ID does not exist" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.298138 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"} err="failed to get container status \"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2\": rpc error: code = NotFound desc = could not find container \"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2\": container with ID starting with 9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2 not found: ID does not exist"
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.316498 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"]
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.321773 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"]
Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.760357 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" path="/var/lib/kubelet/pods/bbb41873-fa83-4786-b31d-d0d3ebeb902b/volumes"
Jan 27 18:15:26 crc kubenswrapper[4907]: I0127 18:15:26.522368 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:15:26 crc kubenswrapper[4907]: I0127 18:15:26.523240 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:15:46 crc kubenswrapper[4907]: I0127 18:15:46.422915 4907 scope.go:117] "RemoveContainer" containerID="ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84"
Jan 27 18:15:46 crc kubenswrapper[4907]: I0127 18:15:46.437930 4907 scope.go:117] "RemoveContainer" containerID="4b9a25f367c300489066223e7c655f68dd2a0d8bca339cc8ab69304836e3cab8"
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.521206 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.522256 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.522333 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh"
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.523432 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.523716 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded" gracePeriod=600
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.716136 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded" exitCode=0
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.716198 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded"}
Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.716309 4907 scope.go:117] "RemoveContainer" containerID="42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39"
Jan 27 18:15:57 crc kubenswrapper[4907]: I0127 18:15:57.727206 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f"}
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.427547 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"]
Jan 27 18:17:08 crc kubenswrapper[4907]: E0127 18:17:08.428496 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428509 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console"
Jan 27 18:17:08 crc kubenswrapper[4907]: E0127 18:17:08.428518 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerName="collect-profiles"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428525 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerName="collect-profiles"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428665 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerName="collect-profiles"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428674 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.429487 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.432547 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.439005 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"]
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.499259 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.499818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.499845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.601165 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.601393 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.601442 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.602282 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"
Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.602379 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName:
\"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.627672 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.786643 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.992014 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"] Jan 27 18:17:09 crc kubenswrapper[4907]: I0127 18:17:09.280003 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerStarted","Data":"1bd10774a8b613771fc57d0da596cfbb2be7abf43d460e58524a9568bc042341"} Jan 27 18:17:09 crc kubenswrapper[4907]: I0127 18:17:09.280066 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerStarted","Data":"c94ead4396ccaaa12ad64c724d23981bc8d6f10d8d9aef6045fac5d6894727e5"} Jan 27 18:17:10 crc kubenswrapper[4907]: I0127 18:17:10.291034 4907 
generic.go:334] "Generic (PLEG): container finished" podID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerID="1bd10774a8b613771fc57d0da596cfbb2be7abf43d460e58524a9568bc042341" exitCode=0 Jan 27 18:17:10 crc kubenswrapper[4907]: I0127 18:17:10.291164 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"1bd10774a8b613771fc57d0da596cfbb2be7abf43d460e58524a9568bc042341"} Jan 27 18:17:10 crc kubenswrapper[4907]: I0127 18:17:10.294479 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:17:12 crc kubenswrapper[4907]: I0127 18:17:12.315423 4907 generic.go:334] "Generic (PLEG): container finished" podID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerID="5e3533c5a2464e64fa9ee0b2262a8a3b1226d6e695a8ded74e370605957f71ef" exitCode=0 Jan 27 18:17:12 crc kubenswrapper[4907]: I0127 18:17:12.315527 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"5e3533c5a2464e64fa9ee0b2262a8a3b1226d6e695a8ded74e370605957f71ef"} Jan 27 18:17:13 crc kubenswrapper[4907]: I0127 18:17:13.325585 4907 generic.go:334] "Generic (PLEG): container finished" podID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerID="8d3691680db889fd3cf4dd81427e8fe95ca47d5ddb14a685ad212084be35cc2d" exitCode=0 Jan 27 18:17:13 crc kubenswrapper[4907]: I0127 18:17:13.325643 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"8d3691680db889fd3cf4dd81427e8fe95ca47d5ddb14a685ad212084be35cc2d"} Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 
18:17:14.721268 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.796430 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.796652 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.796678 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.798578 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle" (OuterVolumeSpecName: "bundle") pod "23fc61bd-6b09-47f7-b16a-b71c959bef3d" (UID: "23fc61bd-6b09-47f7-b16a-b71c959bef3d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.807479 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5" (OuterVolumeSpecName: "kube-api-access-f8zq5") pod "23fc61bd-6b09-47f7-b16a-b71c959bef3d" (UID: "23fc61bd-6b09-47f7-b16a-b71c959bef3d"). InnerVolumeSpecName "kube-api-access-f8zq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.847351 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util" (OuterVolumeSpecName: "util") pod "23fc61bd-6b09-47f7-b16a-b71c959bef3d" (UID: "23fc61bd-6b09-47f7-b16a-b71c959bef3d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.898001 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.898047 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.898066 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:15 crc kubenswrapper[4907]: I0127 18:17:15.350736 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" 
event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"c94ead4396ccaaa12ad64c724d23981bc8d6f10d8d9aef6045fac5d6894727e5"} Jan 27 18:17:15 crc kubenswrapper[4907]: I0127 18:17:15.350803 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94ead4396ccaaa12ad64c724d23981bc8d6f10d8d9aef6045fac5d6894727e5" Jan 27 18:17:15 crc kubenswrapper[4907]: I0127 18:17:15.350929 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.189971 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff"] Jan 27 18:17:26 crc kubenswrapper[4907]: E0127 18:17:26.190733 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="extract" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190745 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="extract" Jan 27 18:17:26 crc kubenswrapper[4907]: E0127 18:17:26.190763 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="util" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190769 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="util" Jan 27 18:17:26 crc kubenswrapper[4907]: E0127 18:17:26.190781 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="pull" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190788 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="pull" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190888 4907 
memory_manager.go:354] "RemoveStaleState removing state" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="extract" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.191388 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.193751 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.194219 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7677k" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.199920 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.204995 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.205786 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.213214 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.214483 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.219013 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-bkbgx" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.221894 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.222683 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.226464 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.241607 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.283975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284087 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l77n\" (UniqueName: \"kubernetes.io/projected/d68ab367-2841-460c-b666-5b52ec455dd2-kube-api-access-9l77n\") pod \"obo-prometheus-operator-68bc856cb9-k7sff\" (UID: \"d68ab367-2841-460c-b666-5b52ec455dd2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284119 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284137 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385670 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385769 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l77n\" (UniqueName: 
\"kubernetes.io/projected/d68ab367-2841-460c-b666-5b52ec455dd2-kube-api-access-9l77n\") pod \"obo-prometheus-operator-68bc856cb9-k7sff\" (UID: \"d68ab367-2841-460c-b666-5b52ec455dd2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385831 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385874 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.396355 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.397282 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.400995 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.400995 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.405318 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7x4fp"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.406061 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.406134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l77n\" (UniqueName: \"kubernetes.io/projected/d68ab367-2841-460c-b666-5b52ec455dd2-kube-api-access-9l77n\") pod \"obo-prometheus-operator-68bc856cb9-k7sff\" (UID: \"d68ab367-2841-460c-b666-5b52ec455dd2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.408406 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.408420 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-85xln" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.428857 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7x4fp"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.487284 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdl4\" (UniqueName: \"kubernetes.io/projected/812bcca3-8896-4492-86ff-1df596f0e604-kube-api-access-qcdl4\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.487369 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/812bcca3-8896-4492-86ff-1df596f0e604-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 
18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.514808 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.540113 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.563767 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.590328 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/812bcca3-8896-4492-86ff-1df596f0e604-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.590422 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcdl4\" (UniqueName: \"kubernetes.io/projected/812bcca3-8896-4492-86ff-1df596f0e604-kube-api-access-qcdl4\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.598598 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-65v8r"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.599369 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.600484 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/812bcca3-8896-4492-86ff-1df596f0e604-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.602462 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jxxzz" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.609151 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcdl4\" (UniqueName: \"kubernetes.io/projected/812bcca3-8896-4492-86ff-1df596f0e604-kube-api-access-qcdl4\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.615372 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-65v8r"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.691860 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27rk\" (UniqueName: \"kubernetes.io/projected/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-kube-api-access-d27rk\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.691912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-openshift-service-ca\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.790195 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.797298 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27rk\" (UniqueName: \"kubernetes.io/projected/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-kube-api-access-d27rk\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.797344 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-openshift-service-ca\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.798879 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-openshift-service-ca\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.823157 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27rk\" (UniqueName: \"kubernetes.io/projected/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-kube-api-access-d27rk\") pod \"perses-operator-5bf474d74f-65v8r\" 
(UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.924503 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh"] Jan 27 18:17:26 crc kubenswrapper[4907]: W0127 18:17:26.936058 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a068f6_1c40_4947_b9bd_3b018ddcb25b.slice/crio-256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48 WatchSource:0}: Error finding container 256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48: Status 404 returned error can't find the container with id 256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48 Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.948852 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.073935 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s"] Jan 27 18:17:27 crc kubenswrapper[4907]: W0127 18:17:27.081347 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91eb4541_31f7_488a_ae31_d57bfa265442.slice/crio-ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24 WatchSource:0}: Error finding container ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24: Status 404 returned error can't find the container with id ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24 Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.104492 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff"] Jan 27 18:17:27 crc 
kubenswrapper[4907]: I0127 18:17:27.244344 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-65v8r"] Jan 27 18:17:27 crc kubenswrapper[4907]: W0127 18:17:27.252904 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99183c02_34c0_4a91_9e6e_0efd5d2a7a42.slice/crio-3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af WatchSource:0}: Error finding container 3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af: Status 404 returned error can't find the container with id 3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.271936 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7x4fp"] Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.449134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerStarted","Data":"b7ccf81eeea2832b2057c4c1ffe73101b7b4c0075a4599a7452239bbd61e2e00"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.450262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" event={"ID":"91eb4541-31f7-488a-ae31-d57bfa265442","Type":"ContainerStarted","Data":"ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.451206 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" event={"ID":"99183c02-34c0-4a91-9e6e-0efd5d2a7a42","Type":"ContainerStarted","Data":"3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.452204 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" event={"ID":"d68ab367-2841-460c-b666-5b52ec455dd2","Type":"ContainerStarted","Data":"97485635ee4fefc2e26a2bba4980c89b490725c8eb1cfbe2f081c335d6bd9379"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.453059 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" event={"ID":"c1a068f6-1c40-4947-b9bd-3b018ddcb25b","Type":"ContainerStarted","Data":"256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.043903 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045137 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-controller" containerID="cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045316 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" containerID="cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045349 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" containerID="cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045380 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" containerID="cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045414 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045476 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" containerID="cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045639 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" containerID="cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.087160 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" containerID="cri-o://5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.540485 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 
18:17:32.543464 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544055 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544510 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544544 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544568 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544580 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544590 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649" exitCode=143 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544599 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b" exitCode=143 Jan 27 18:17:32 crc kubenswrapper[4907]: 
I0127 18:17:32.544665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544699 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544718 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544730 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544742 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544753 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544772 4907 scope.go:117] "RemoveContainer" 
containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.548798 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/2.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549212 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549249 4907 generic.go:334] "Generic (PLEG): container finished" podID="985b7738-a27c-4276-8160-c2baa64ab7f6" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" exitCode=2 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549268 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerDied","Data":"14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549758 4907 scope.go:117] "RemoveContainer" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" Jan 27 18:17:32 crc kubenswrapper[4907]: E0127 18:17:32.550079 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fgtpz_openshift-multus(985b7738-a27c-4276-8160-c2baa64ab7f6)\"" pod="openshift-multus/multus-fgtpz" podUID="985b7738-a27c-4276-8160-c2baa64ab7f6" Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.153767 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62f5e7d_70be_4705_a4b0_d5e4f531cfde.slice/crio-2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.193790 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.193906 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194105 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194171 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194336 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194361 4907 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194501 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194524 4907 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.562081 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.562740 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 
18:17:33.563067 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20" exitCode=0 Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.563124 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767" exitCode=0 Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.563126 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20"} Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.563168 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767"} Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.038761 4907 scope.go:117] "RemoveContainer" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.601632 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.602178 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.995609 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:37 crc 
kubenswrapper[4907]: I0127 18:17:37.995988 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.996313 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068308 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068328 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068387 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068408 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068428 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068452 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068472 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068503 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: 
\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068529 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068567 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068594 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068609 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068622 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068641 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068682 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068707 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068722 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068784 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068933 4907 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068974 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069005 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069022 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069423 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069468 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069922 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069989 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070330 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070366 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket" (OuterVolumeSpecName: "log-socket") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070392 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070419 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070454 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070483 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070510 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log" (OuterVolumeSpecName: "node-log") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070634 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash" (OuterVolumeSpecName: "host-slash") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070983 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.088509 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4xx4s"] Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.089333 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.091660 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q" (OuterVolumeSpecName: "kube-api-access-qkx4q") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "kube-api-access-qkx4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.091893 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.091974 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092058 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092132 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092206 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092286 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092363 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kubecfg-setup" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092433 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kubecfg-setup" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092504 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092631 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092715 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092781 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092855 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092917 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092982 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093043 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093111 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093181 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093306 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093381 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" 
containerName="ovn-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093457 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093528 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093611 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093681 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093892 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093976 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094049 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094118 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094194 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094259 4907 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094330 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094401 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094476 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094543 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094630 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.094843 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.095144 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.097710 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.103975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.171913 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-etc-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172001 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-bin\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172030 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-log-socket\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172053 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172078 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-kubelet\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172094 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-env-overrides\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-var-lib-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172197 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovn-node-metrics-cert\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172218 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172242 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-ovn\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9mj\" (UniqueName: \"kubernetes.io/projected/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-kube-api-access-4d9mj\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-netd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 
18:17:38.172511 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-netns\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172534 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-node-log\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172585 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-config\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172606 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-script-lib\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172630 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-slash\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: 
I0127 18:17:38.172658 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-systemd-units\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172682 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-systemd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172743 4907 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172755 4907 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172771 4907 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172784 4907 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172794 4907 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172803 4907 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172814 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172822 4907 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172832 4907 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172842 4907 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172854 4907 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172864 4907 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172873 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172882 4907 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172894 4907 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172902 4907 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172911 4907 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172919 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172931 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274186 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274228 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-ovn\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274252 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9mj\" (UniqueName: \"kubernetes.io/projected/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-kube-api-access-4d9mj\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274279 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-netd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274298 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-netns\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274311 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-node-log\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274327 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-config\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274345 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-script-lib\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274367 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-slash\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274382 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-systemd-units\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274404 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-systemd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274429 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-etc-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274443 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-bin\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274461 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-log-socket\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274477 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274502 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-kubelet\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274518 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-env-overrides\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274540 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-var-lib-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovn-node-metrics-cert\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274595 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274662 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274694 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274717 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-ovn\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275108 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-netd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275132 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-netns\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275150 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-node-log\") pod \"ovnkube-node-4xx4s\" (UID: 
\"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275788 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-config\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-script-lib\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276207 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-slash\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276228 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-systemd-units\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-systemd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc 
kubenswrapper[4907]: I0127 18:17:38.276266 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-etc-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-bin\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-log-socket\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276324 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276344 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-kubelet\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276638 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-env-overrides\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276674 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-var-lib-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.281123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovn-node-metrics-cert\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.300376 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9mj\" (UniqueName: \"kubernetes.io/projected/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-kube-api-access-4d9mj\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.537843 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: W0127 18:17:38.559942 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee97e15a_ebc3_4c61_9841_9c1fb43fdee7.slice/crio-78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976 WatchSource:0}: Error finding container 78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976: Status 404 returned error can't find the container with id 78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976 Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.615105 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.615989 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.616469 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.616588 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.616695 4907 scope.go:117] "RemoveContainer" containerID="5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.618452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerStarted","Data":"32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.618697 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.621427 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/2.log" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.623502 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" event={"ID":"91eb4541-31f7-488a-ae31-d57bfa265442","Type":"ContainerStarted","Data":"ef5c3012247def2a8d08a76ca0df8bb6d046453fb54ac1c37ff4a1a99b0ae52c"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.626243 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" event={"ID":"99183c02-34c0-4a91-9e6e-0efd5d2a7a42","Type":"ContainerStarted","Data":"2e27f133bb71f31801a29b81348785c06c151d02579560c08aff145ecdfbfd7e"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.626288 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.628388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" event={"ID":"d68ab367-2841-460c-b666-5b52ec455dd2","Type":"ContainerStarted","Data":"6ffd4f14e8e49430c199c48ac416b2c29aaa36c06475a4885b4cb1188b6e8017"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.629940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.631670 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" event={"ID":"c1a068f6-1c40-4947-b9bd-3b018ddcb25b","Type":"ContainerStarted","Data":"dc9b72d5182336e502b6892ee806e7a2caa695b0366e2eb155d131fbb2100f1b"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.654367 4907 scope.go:117] "RemoveContainer" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.655083 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.666389 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" podStartSLOduration=2.566762876 podStartE2EDuration="12.666371353s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:26.940342679 +0000 UTC m=+702.069625291" lastFinishedPulling="2026-01-27 18:17:37.039951156 +0000 UTC m=+712.169233768" observedRunningTime="2026-01-27 18:17:38.664371285 +0000 UTC m=+713.793653907" watchObservedRunningTime="2026-01-27 18:17:38.666371353 +0000 UTC m=+713.795653965" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 
18:17:38.669129 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podStartSLOduration=1.980635576 podStartE2EDuration="12.669116473s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.281307583 +0000 UTC m=+702.410590195" lastFinishedPulling="2026-01-27 18:17:37.96978848 +0000 UTC m=+713.099071092" observedRunningTime="2026-01-27 18:17:38.639177165 +0000 UTC m=+713.768459777" watchObservedRunningTime="2026-01-27 18:17:38.669116473 +0000 UTC m=+713.798399085" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.678661 4907 scope.go:117] "RemoveContainer" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.689681 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" podStartSLOduration=1.8708415729999999 podStartE2EDuration="12.689662218s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.144371244 +0000 UTC m=+702.273653856" lastFinishedPulling="2026-01-27 18:17:37.963191889 +0000 UTC m=+713.092474501" observedRunningTime="2026-01-27 18:17:38.688779333 +0000 UTC m=+713.818061945" watchObservedRunningTime="2026-01-27 18:17:38.689662218 +0000 UTC m=+713.818944830" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.705792 4907 scope.go:117] "RemoveContainer" containerID="b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.715883 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" podStartSLOduration=1.821582716 podStartE2EDuration="12.715863128s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.084356564 
+0000 UTC m=+702.213639176" lastFinishedPulling="2026-01-27 18:17:37.978636976 +0000 UTC m=+713.107919588" observedRunningTime="2026-01-27 18:17:38.715692823 +0000 UTC m=+713.844975435" watchObservedRunningTime="2026-01-27 18:17:38.715863128 +0000 UTC m=+713.845145740" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.744702 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podStartSLOduration=2.020214153 podStartE2EDuration="12.744684353s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.255507775 +0000 UTC m=+702.384790387" lastFinishedPulling="2026-01-27 18:17:37.979977975 +0000 UTC m=+713.109260587" observedRunningTime="2026-01-27 18:17:38.742528741 +0000 UTC m=+713.871811363" watchObservedRunningTime="2026-01-27 18:17:38.744684353 +0000 UTC m=+713.873966965" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.762282 4907 scope.go:117] "RemoveContainer" containerID="2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.766988 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.770100 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.786901 4907 scope.go:117] "RemoveContainer" containerID="e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.813821 4907 scope.go:117] "RemoveContainer" containerID="765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.830704 4907 scope.go:117] "RemoveContainer" containerID="76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b" Jan 27 18:17:38 crc 
kubenswrapper[4907]: I0127 18:17:38.850575 4907 scope.go:117] "RemoveContainer" containerID="4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71" Jan 27 18:17:39 crc kubenswrapper[4907]: I0127 18:17:39.639500 4907 generic.go:334] "Generic (PLEG): container finished" podID="ee97e15a-ebc3-4c61-9841-9c1fb43fdee7" containerID="d07173aaf7602b3ab45a4d709aabf77031b98f2ac3561150b4a2a6613ff33c37" exitCode=0 Jan 27 18:17:39 crc kubenswrapper[4907]: I0127 18:17:39.639581 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerDied","Data":"d07173aaf7602b3ab45a4d709aabf77031b98f2ac3561150b4a2a6613ff33c37"} Jan 27 18:17:39 crc kubenswrapper[4907]: I0127 18:17:39.768466 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" path="/var/lib/kubelet/pods/a62f5e7d-70be-4705-a4b0-d5e4f531cfde/volumes" Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.650827 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"1b63f009934061028d552839eb69f33db38326b6aceebdd43b5123ee37779657"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651176 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"387301dd52081a90f0a09b5b30b1f1b3d04ff6b880b9fed6e5a7b30f31d34deb"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651194 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"4962d59ead012d39fb0997d4004e6272420eb60f308afbcef9b7ad45d445a7f1"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 
18:17:40.651206 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"2d0056deb1106a9ef6a8f2c1e1d70bf4263e749d13cea374adf31fed74393fa6"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651215 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"a5565b199cd5a01a375e8dc154be2d539a412d43cf397884107629c889982120"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651223 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"4b387cea73a6c270909f985cd35b570f16e45e4212fc6b5b7fe916047901e582"} Jan 27 18:17:43 crc kubenswrapper[4907]: I0127 18:17:43.685379 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"dd5ebe175d4d5f8e4da250274f3ad24e11a0f2c112e22ab69c4cbe00980c3dd5"} Jan 27 18:17:44 crc kubenswrapper[4907]: I0127 18:17:44.747794 4907 scope.go:117] "RemoveContainer" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" Jan 27 18:17:44 crc kubenswrapper[4907]: E0127 18:17:44.748366 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fgtpz_openshift-multus(985b7738-a27c-4276-8160-c2baa64ab7f6)\"" pod="openshift-multus/multus-fgtpz" podUID="985b7738-a27c-4276-8160-c2baa64ab7f6" Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.701611 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" 
event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"a264f9c598193abd1c32a961064b3eaf7280e75fc675ed78fb7f62bc4306d43f"} Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.702218 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.739450 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" podStartSLOduration=7.739427181 podStartE2EDuration="7.739427181s" podCreationTimestamp="2026-01-27 18:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:17:45.733334924 +0000 UTC m=+720.862617536" watchObservedRunningTime="2026-01-27 18:17:45.739427181 +0000 UTC m=+720.868709793" Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.761101 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.714678 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.715135 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.807134 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.956249 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.774980 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-cainjector-cf98fcc89-58hmb"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.776100 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.783523 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.783681 4907 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vvz89" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.783829 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.794467 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-58hmb"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.834617 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-jslkq"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.835901 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.841261 4907 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-dghd4" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.843189 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jslkq"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.930289 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jfhbt"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.931859 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.933637 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjc8\" (UniqueName: \"kubernetes.io/projected/1fa35228-e301-48b5-b17b-21694e61ef16-kube-api-access-hvjc8\") pod \"cert-manager-858654f9db-jslkq\" (UID: \"1fa35228-e301-48b5-b17b-21694e61ef16\") " pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.933746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kq5w\" (UniqueName: \"kubernetes.io/projected/19be711f-36d9-46ae-8f7a-fdba490484da-kube-api-access-9kq5w\") pod \"cert-manager-cainjector-cf98fcc89-58hmb\" (UID: \"19be711f-36d9-46ae-8f7a-fdba490484da\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.956409 4907 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l9jkf" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.964355 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jfhbt"] Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.036335 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjc8\" (UniqueName: \"kubernetes.io/projected/1fa35228-e301-48b5-b17b-21694e61ef16-kube-api-access-hvjc8\") pod \"cert-manager-858654f9db-jslkq\" (UID: \"1fa35228-e301-48b5-b17b-21694e61ef16\") " pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.036474 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kq5w\" (UniqueName: \"kubernetes.io/projected/19be711f-36d9-46ae-8f7a-fdba490484da-kube-api-access-9kq5w\") pod 
\"cert-manager-cainjector-cf98fcc89-58hmb\" (UID: \"19be711f-36d9-46ae-8f7a-fdba490484da\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.036521 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzvw\" (UniqueName: \"kubernetes.io/projected/53565dd2-5a29-4ba0-9654-36b9600f765b-kube-api-access-hrzvw\") pod \"cert-manager-webhook-687f57d79b-jfhbt\" (UID: \"53565dd2-5a29-4ba0-9654-36b9600f765b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.063985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kq5w\" (UniqueName: \"kubernetes.io/projected/19be711f-36d9-46ae-8f7a-fdba490484da-kube-api-access-9kq5w\") pod \"cert-manager-cainjector-cf98fcc89-58hmb\" (UID: \"19be711f-36d9-46ae-8f7a-fdba490484da\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.081579 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjc8\" (UniqueName: \"kubernetes.io/projected/1fa35228-e301-48b5-b17b-21694e61ef16-kube-api-access-hvjc8\") pod \"cert-manager-858654f9db-jslkq\" (UID: \"1fa35228-e301-48b5-b17b-21694e61ef16\") " pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.116865 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.140120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzvw\" (UniqueName: \"kubernetes.io/projected/53565dd2-5a29-4ba0-9654-36b9600f765b-kube-api-access-hrzvw\") pod \"cert-manager-webhook-687f57d79b-jfhbt\" (UID: \"53565dd2-5a29-4ba0-9654-36b9600f765b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152279 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152363 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152397 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152450 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" podUID="19be711f-36d9-46ae-8f7a-fdba490484da" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.164027 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzvw\" (UniqueName: \"kubernetes.io/projected/53565dd2-5a29-4ba0-9654-36b9600f765b-kube-api-access-hrzvw\") pod \"cert-manager-webhook-687f57d79b-jfhbt\" (UID: \"53565dd2-5a29-4ba0-9654-36b9600f765b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.197387 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.235877 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.235991 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.236017 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.236078 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-jslkq" podUID="1fa35228-e301-48b5-b17b-21694e61ef16" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.280341 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299311 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299386 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299414 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299468 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.725173 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.725216 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.725217 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.726213 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.726214 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.726325 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775126 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775197 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775220 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775286 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786287 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786360 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786382 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786436 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" podUID="19be711f-36d9-46ae-8f7a-fdba490484da" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792479 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792585 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792615 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792673 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-jslkq" podUID="1fa35228-e301-48b5-b17b-21694e61ef16" Jan 27 18:17:56 crc kubenswrapper[4907]: I0127 18:17:56.520947 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:17:56 crc kubenswrapper[4907]: I0127 18:17:56.521955 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:17:58 crc kubenswrapper[4907]: I0127 18:17:58.748194 4907 scope.go:117] "RemoveContainer" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.487330 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/2.log" Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.487393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"879127d9d3b1234efa50f4870cc4817f5577a6e333c6d3116383007bc83a5960"} Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.747506 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.748531 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789380 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789470 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789500 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789582 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-jslkq" podUID="1fa35228-e301-48b5-b17b-21694e61ef16" Jan 27 18:18:01 crc kubenswrapper[4907]: I0127 18:18:01.747237 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:18:01 crc kubenswrapper[4907]: I0127 18:18:01.748505 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.198881 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-58hmb"] Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.509160 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" event={"ID":"19be711f-36d9-46ae-8f7a-fdba490484da","Type":"ContainerStarted","Data":"19d2d180bea7a3b89fbb0f4692d5b4240257cbe3f322e02ee1d4daba3774b7ba"} Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.748013 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.748738 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.976205 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jfhbt"] Jan 27 18:18:02 crc kubenswrapper[4907]: W0127 18:18:02.981084 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53565dd2_5a29_4ba0_9654_36b9600f765b.slice/crio-956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e WatchSource:0}: Error finding container 956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e: Status 404 returned error can't find the container with id 956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e Jan 27 18:18:03 crc kubenswrapper[4907]: I0127 18:18:03.518282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" event={"ID":"53565dd2-5a29-4ba0-9654-36b9600f765b","Type":"ContainerStarted","Data":"956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e"} Jan 27 18:18:05 crc kubenswrapper[4907]: I0127 18:18:05.533753 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" event={"ID":"19be711f-36d9-46ae-8f7a-fdba490484da","Type":"ContainerStarted","Data":"b9092f590d1e086b8b02772319d398612206e4925a11ceacbe72157f2c8dd81f"} Jan 27 18:18:05 crc kubenswrapper[4907]: I0127 18:18:05.562033 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" podStartSLOduration=16.056831649 podStartE2EDuration="18.5620105s" podCreationTimestamp="2026-01-27 18:17:47 +0000 UTC" firstStartedPulling="2026-01-27 
18:18:02.207362034 +0000 UTC m=+737.336644656" lastFinishedPulling="2026-01-27 18:18:04.712540855 +0000 UTC m=+739.841823507" observedRunningTime="2026-01-27 18:18:05.555251845 +0000 UTC m=+740.684534467" watchObservedRunningTime="2026-01-27 18:18:05.5620105 +0000 UTC m=+740.691293112" Jan 27 18:18:07 crc kubenswrapper[4907]: I0127 18:18:07.558747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" event={"ID":"53565dd2-5a29-4ba0-9654-36b9600f765b","Type":"ContainerStarted","Data":"16eb4e3e04684bc396f8b415958a6dfeff3981eeff07496a006170e7acbc673f"} Jan 27 18:18:07 crc kubenswrapper[4907]: I0127 18:18:07.559297 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:07 crc kubenswrapper[4907]: I0127 18:18:07.592819 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podStartSLOduration=17.197585527 podStartE2EDuration="20.592788919s" podCreationTimestamp="2026-01-27 18:17:47 +0000 UTC" firstStartedPulling="2026-01-27 18:18:02.983792231 +0000 UTC m=+738.113074843" lastFinishedPulling="2026-01-27 18:18:06.378995603 +0000 UTC m=+741.508278235" observedRunningTime="2026-01-27 18:18:07.588921967 +0000 UTC m=+742.718204639" watchObservedRunningTime="2026-01-27 18:18:07.592788919 +0000 UTC m=+742.722071541" Jan 27 18:18:08 crc kubenswrapper[4907]: I0127 18:18:08.566414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.283532 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.748289 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.749511 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.994265 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jslkq"] Jan 27 18:18:14 crc kubenswrapper[4907]: W0127 18:18:14.005289 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fa35228_e301_48b5_b17b_21694e61ef16.slice/crio-f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c WatchSource:0}: Error finding container f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c: Status 404 returned error can't find the container with id f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c Jan 27 18:18:14 crc kubenswrapper[4907]: I0127 18:18:14.611934 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jslkq" event={"ID":"1fa35228-e301-48b5-b17b-21694e61ef16","Type":"ContainerStarted","Data":"f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c"} Jan 27 18:18:16 crc kubenswrapper[4907]: I0127 18:18:16.630767 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jslkq" event={"ID":"1fa35228-e301-48b5-b17b-21694e61ef16","Type":"ContainerStarted","Data":"aed37ffb5087a21e94cf614aab30edd147b7da31c8cc8cad9d7fb6626440d998"} Jan 27 18:18:16 crc kubenswrapper[4907]: I0127 18:18:16.649761 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-jslkq" podStartSLOduration=28.004012081 podStartE2EDuration="29.649739058s" podCreationTimestamp="2026-01-27 18:17:47 +0000 UTC" firstStartedPulling="2026-01-27 18:18:14.008951996 +0000 UTC m=+749.138234608" 
lastFinishedPulling="2026-01-27 18:18:15.654678983 +0000 UTC m=+750.783961585" observedRunningTime="2026-01-27 18:18:16.647872844 +0000 UTC m=+751.777155506" watchObservedRunningTime="2026-01-27 18:18:16.649739058 +0000 UTC m=+751.779021680" Jan 27 18:18:20 crc kubenswrapper[4907]: I0127 18:18:20.775443 4907 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 18:18:26 crc kubenswrapper[4907]: I0127 18:18:26.521702 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:18:26 crc kubenswrapper[4907]: I0127 18:18:26.522713 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.795223 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"] Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.797897 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.800108 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.809044 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"] Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.914054 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.914154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.914254 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: 
I0127 18:18:42.015676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.015813 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.015902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.016410 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.016492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.048576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.118689 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.220328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"] Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.222128 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.230495 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"] Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.400303 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"] Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.422777 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.423100 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.423721 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 
18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.525612 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526167 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526234 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526832 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526921 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.551513 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.847177 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.861252 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerID="c693cb97ad2ed5d0953eb99d80b718db43fb9e5c380da9ee76f90c687dfa7c0c" exitCode=0 Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.861319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"c693cb97ad2ed5d0953eb99d80b718db43fb9e5c380da9ee76f90c687dfa7c0c"} Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.861363 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerStarted","Data":"dd63713c46a940f4eaa87e1d3a4f01883be1ae7da3ba53929efeceadbbcc153d"} Jan 27 18:18:43 crc 
kubenswrapper[4907]: I0127 18:18:43.168797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"] Jan 27 18:18:43 crc kubenswrapper[4907]: I0127 18:18:43.878396 4907 generic.go:334] "Generic (PLEG): container finished" podID="7584cc55-f71d-485d-aca5-31f66746f17a" containerID="5044a562f55391d794a14dae2b2518e06bf14ea180e2d8482fc61eae6edca11a" exitCode=0 Jan 27 18:18:43 crc kubenswrapper[4907]: I0127 18:18:43.878461 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"5044a562f55391d794a14dae2b2518e06bf14ea180e2d8482fc61eae6edca11a"} Jan 27 18:18:43 crc kubenswrapper[4907]: I0127 18:18:43.878499 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerStarted","Data":"e4559db33415e96708fad227340b9a44559451330b997036960b12b2ced113eb"} Jan 27 18:18:44 crc kubenswrapper[4907]: I0127 18:18:44.888691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"9d06897a8f80857ce60249f6df99fd6e0b16d7b140318e9d9cbfd00c6757b05f"} Jan 27 18:18:44 crc kubenswrapper[4907]: I0127 18:18:44.888643 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerID="9d06897a8f80857ce60249f6df99fd6e0b16d7b140318e9d9cbfd00c6757b05f" exitCode=0 Jan 27 18:18:45 crc kubenswrapper[4907]: E0127 18:18:45.343685 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9834e6_1e3d_42b3_90bf_204c9fa7bb68.slice/crio-607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9834e6_1e3d_42b3_90bf_204c9fa7bb68.slice/crio-conmon-607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.546409 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"] Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.548855 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.558893 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"] Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.586688 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.587098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.587205 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.687887 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.688275 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.688427 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.688491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.689165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.719661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.866805 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.901155 4907 generic.go:334] "Generic (PLEG): container finished" podID="7584cc55-f71d-485d-aca5-31f66746f17a" containerID="43222a645e1281b4851be0fc1194347ebc97a3396243980ab97b51d2ee170f7a" exitCode=0 Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.901270 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"43222a645e1281b4851be0fc1194347ebc97a3396243980ab97b51d2ee170f7a"} Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.912019 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerID="607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab" exitCode=0 Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.912077 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" 
event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab"} Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.162198 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"] Jan 27 18:18:46 crc kubenswrapper[4907]: W0127 18:18:46.170498 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod232ce760_7804_4270_9073_256444e355ea.slice/crio-b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e WatchSource:0}: Error finding container b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e: Status 404 returned error can't find the container with id b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.921891 4907 generic.go:334] "Generic (PLEG): container finished" podID="232ce760-7804-4270-9073-256444e355ea" containerID="8b0926806961ab50c9c488a6c1db2844dd920a9cd9b6b113a061e53caf3bf66c" exitCode=0 Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.922058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"8b0926806961ab50c9c488a6c1db2844dd920a9cd9b6b113a061e53caf3bf66c"} Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.922134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerStarted","Data":"b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e"} Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.924880 4907 generic.go:334] "Generic (PLEG): container finished" podID="7584cc55-f71d-485d-aca5-31f66746f17a" containerID="67a75734d3f67f5024a57878fe87d33c694c4e4e09c46e83f70fa879a9c5dfb1" 
exitCode=0 Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.925041 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"67a75734d3f67f5024a57878fe87d33c694c4e4e09c46e83f70fa879a9c5dfb1"} Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.211575 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.311532 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.311667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.311717 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.312779 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle" (OuterVolumeSpecName: "bundle") pod "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" (UID: "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.323786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l" (OuterVolumeSpecName: "kube-api-access-hbt6l") pod "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" (UID: "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68"). InnerVolumeSpecName "kube-api-access-hbt6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.333301 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util" (OuterVolumeSpecName: "util") pod "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" (UID: "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.413368 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.413412 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.413426 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.935075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" 
event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerStarted","Data":"0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850"} Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.939383 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"dd63713c46a940f4eaa87e1d3a4f01883be1ae7da3ba53929efeceadbbcc153d"} Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.939450 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd63713c46a940f4eaa87e1d3a4f01883be1ae7da3ba53929efeceadbbcc153d" Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.939472 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.216428 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.329945 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"7584cc55-f71d-485d-aca5-31f66746f17a\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.330170 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"7584cc55-f71d-485d-aca5-31f66746f17a\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.330196 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"7584cc55-f71d-485d-aca5-31f66746f17a\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.331202 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle" (OuterVolumeSpecName: "bundle") pod "7584cc55-f71d-485d-aca5-31f66746f17a" (UID: "7584cc55-f71d-485d-aca5-31f66746f17a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.336062 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj" (OuterVolumeSpecName: "kube-api-access-krpcj") pod "7584cc55-f71d-485d-aca5-31f66746f17a" (UID: "7584cc55-f71d-485d-aca5-31f66746f17a"). InnerVolumeSpecName "kube-api-access-krpcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.345642 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util" (OuterVolumeSpecName: "util") pod "7584cc55-f71d-485d-aca5-31f66746f17a" (UID: "7584cc55-f71d-485d-aca5-31f66746f17a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.432455 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.432502 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.432519 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.948119 4907 generic.go:334] "Generic (PLEG): container finished" podID="232ce760-7804-4270-9073-256444e355ea" containerID="0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850" exitCode=0 Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.948210 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850"} Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.952229 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"e4559db33415e96708fad227340b9a44559451330b997036960b12b2ced113eb"} Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.952307 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4559db33415e96708fad227340b9a44559451330b997036960b12b2ced113eb" Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.952264 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" Jan 27 18:18:49 crc kubenswrapper[4907]: I0127 18:18:49.962463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerStarted","Data":"262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027"} Jan 27 18:18:49 crc kubenswrapper[4907]: I0127 18:18:49.984877 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sbcj8" podStartSLOduration=2.2095076909999998 podStartE2EDuration="4.984849885s" podCreationTimestamp="2026-01-27 18:18:45 +0000 UTC" firstStartedPulling="2026-01-27 18:18:46.924444882 +0000 UTC m=+782.053727494" lastFinishedPulling="2026-01-27 18:18:49.699787076 +0000 UTC m=+784.829069688" observedRunningTime="2026-01-27 18:18:49.984270768 +0000 UTC m=+785.113553410" watchObservedRunningTime="2026-01-27 18:18:49.984849885 +0000 UTC m=+785.114132537" Jan 27 18:18:55 crc kubenswrapper[4907]: I0127 18:18:55.867334 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:55 crc kubenswrapper[4907]: I0127 18:18:55.868152 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.521647 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.522436 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.522620 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.523627 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.523850 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f" gracePeriod=600 Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.918665 4907 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-sbcj8" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server" probeResult="failure" output=< Jan 27 18:18:56 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:18:56 crc kubenswrapper[4907]: > Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015267 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f" exitCode=0 Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015342 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f"} Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015385 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c"} Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015409 4907 scope.go:117] "RemoveContainer" containerID="ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.150957 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"] Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151718 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="extract" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151734 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="extract" Jan 27 
18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151758 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151766 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151780 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151789 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151801 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151808 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151816 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151822 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151841 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151848 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.152003 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.152022 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.152822 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156203 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156248 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156704 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156907 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-r7dgl"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.157327 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.173065 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.194128 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"]
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318027 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-apiservice-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318087 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/6347c63b-e1fb-4570-a350-68a9f9f1b79b-manager-config\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318115 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphcl\" (UniqueName: \"kubernetes.io/projected/6347c63b-e1fb-4570-a350-68a9f9f1b79b-kube-api-access-jphcl\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318705 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-webhook-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-webhook-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420540 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-apiservice-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420604 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/6347c63b-e1fb-4570-a350-68a9f9f1b79b-manager-config\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420627 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphcl\" (UniqueName: \"kubernetes.io/projected/6347c63b-e1fb-4570-a350-68a9f9f1b79b-kube-api-access-jphcl\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420684 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.423188 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/6347c63b-e1fb-4570-a350-68a9f9f1b79b-manager-config\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.435362 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-apiservice-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.443366 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-webhook-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.447858 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.451977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphcl\" (UniqueName: \"kubernetes.io/projected/6347c63b-e1fb-4570-a350-68a9f9f1b79b-kube-api-access-jphcl\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.470191 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.920510 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"]
Jan 27 18:18:59 crc kubenswrapper[4907]: W0127 18:18:59.931258 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6347c63b_e1fb_4570_a350_68a9f9f1b79b.slice/crio-242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5 WatchSource:0}: Error finding container 242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5: Status 404 returned error can't find the container with id 242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5
Jan 27 18:19:00 crc kubenswrapper[4907]: I0127 18:19:00.042674 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5"}
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.300816 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"]
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.303077 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.310833 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.311335 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-zlzvs"
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.311547 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.352256 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"]
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.398536 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45czk\" (UniqueName: \"kubernetes.io/projected/1f119aff-6ff6-4393-b7d5-19a981e50f3c-kube-api-access-45czk\") pod \"cluster-logging-operator-79cf69ddc8-t7bh6\" (UID: \"1f119aff-6ff6-4393-b7d5-19a981e50f3c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.500805 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45czk\" (UniqueName: \"kubernetes.io/projected/1f119aff-6ff6-4393-b7d5-19a981e50f3c-kube-api-access-45czk\") pod \"cluster-logging-operator-79cf69ddc8-t7bh6\" (UID: \"1f119aff-6ff6-4393-b7d5-19a981e50f3c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.530431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45czk\" (UniqueName: \"kubernetes.io/projected/1f119aff-6ff6-4393-b7d5-19a981e50f3c-kube-api-access-45czk\") pod \"cluster-logging-operator-79cf69ddc8-t7bh6\" (UID: \"1f119aff-6ff6-4393-b7d5-19a981e50f3c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"
Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.633349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"
Jan 27 18:19:04 crc kubenswrapper[4907]: I0127 18:19:04.061215 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"]
Jan 27 18:19:04 crc kubenswrapper[4907]: I0127 18:19:04.079451 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" event={"ID":"1f119aff-6ff6-4393-b7d5-19a981e50f3c","Type":"ContainerStarted","Data":"166a42432633595c547ac0c203b7f6d63cae1ab66ad2c3094983ba0c2555bdae"}
Jan 27 18:19:05 crc kubenswrapper[4907]: I0127 18:19:05.919441 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:19:05 crc kubenswrapper[4907]: I0127 18:19:05.995484 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:19:08 crc kubenswrapper[4907]: I0127 18:19:08.756225 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"]
Jan 27 18:19:08 crc kubenswrapper[4907]: I0127 18:19:08.757380 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sbcj8" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server" containerID="cri-o://262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027" gracePeriod=2
Jan 27 18:19:09 crc kubenswrapper[4907]: I0127 18:19:09.138806 4907 generic.go:334] "Generic (PLEG): container finished" podID="232ce760-7804-4270-9073-256444e355ea" containerID="262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027" exitCode=0
Jan 27 18:19:09 crc kubenswrapper[4907]: I0127 18:19:09.138901 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027"}
Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.924232 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.961179 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"232ce760-7804-4270-9073-256444e355ea\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") "
Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.961235 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"232ce760-7804-4270-9073-256444e355ea\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") "
Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.961316 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod \"232ce760-7804-4270-9073-256444e355ea\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") "
Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.963111 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities" (OuterVolumeSpecName: "utilities") pod "232ce760-7804-4270-9073-256444e355ea" (UID: "232ce760-7804-4270-9073-256444e355ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.976858 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs" (OuterVolumeSpecName: "kube-api-access-87sfs") pod "232ce760-7804-4270-9073-256444e355ea" (UID: "232ce760-7804-4270-9073-256444e355ea"). InnerVolumeSpecName "kube-api-access-87sfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.063768 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.064183 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") on node \"crc\" DevicePath \"\""
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.102376 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "232ce760-7804-4270-9073-256444e355ea" (UID: "232ce760-7804-4270-9073-256444e355ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.163308 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" event={"ID":"1f119aff-6ff6-4393-b7d5-19a981e50f3c","Type":"ContainerStarted","Data":"c983e9e44dd96374cfc020735027a4ddabc7d69698a655d7672ce298a3845248"}
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.165548 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.172367 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e"}
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.172420 4907 scope.go:117] "RemoveContainer" containerID="262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027"
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.173632 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.176641 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0"}
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.206869 4907 scope.go:117] "RemoveContainer" containerID="0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850"
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.232645 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" podStartSLOduration=1.6616893259999999 podStartE2EDuration="8.232625126s" podCreationTimestamp="2026-01-27 18:19:03 +0000 UTC" firstStartedPulling="2026-01-27 18:19:04.072709094 +0000 UTC m=+799.201991706" lastFinishedPulling="2026-01-27 18:19:10.643644904 +0000 UTC m=+805.772927506" observedRunningTime="2026-01-27 18:19:11.207043208 +0000 UTC m=+806.336325820" watchObservedRunningTime="2026-01-27 18:19:11.232625126 +0000 UTC m=+806.361907738"
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.244792 4907 scope.go:117] "RemoveContainer" containerID="8b0926806961ab50c9c488a6c1db2844dd920a9cd9b6b113a061e53caf3bf66c"
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.245165 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"]
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.255545 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"]
Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.772337 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232ce760-7804-4270-9073-256444e355ea" path="/var/lib/kubelet/pods/232ce760-7804-4270-9073-256444e355ea/volumes"
Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.247437 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"fb1e2b89a26f7a786ddbaa85c7b6ba998e780bea2a3765880fc6333733e9e8a8"}
Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.248380 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.250753 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.277258 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podStartSLOduration=1.720722765 podStartE2EDuration="19.277228672s" podCreationTimestamp="2026-01-27 18:18:59 +0000 UTC" firstStartedPulling="2026-01-27 18:18:59.933215071 +0000 UTC m=+795.062497683" lastFinishedPulling="2026-01-27 18:19:17.489720978 +0000 UTC m=+812.619003590" observedRunningTime="2026-01-27 18:19:18.272882966 +0000 UTC m=+813.402165628" watchObservedRunningTime="2026-01-27 18:19:18.277228672 +0000 UTC m=+813.406511334"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.172505 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Jan 27 18:19:23 crc kubenswrapper[4907]: E0127 18:19:23.173703 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173737 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server"
Jan 27 18:19:23 crc kubenswrapper[4907]: E0127 18:19:23.173751 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-content"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173758 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-content"
Jan 27 18:19:23 crc kubenswrapper[4907]: E0127 18:19:23.173778 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-utilities"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173785 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-utilities"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173923 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.174582 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.176627 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.176789 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.180834 4907 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-hjd6j"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.191055 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.212325 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.212432 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84bnj\" (UniqueName: \"kubernetes.io/projected/09a57c24-4f8a-4799-82b4-1310608086fa-kube-api-access-84bnj\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.314485 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84bnj\" (UniqueName: \"kubernetes.io/projected/09a57c24-4f8a-4799-82b4-1310608086fa-kube-api-access-84bnj\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.314642 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.319511 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.319590 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6daae88d10a28a27886d92c3dc3e6bcc3af2dddc5a85e66444be228d182862ed/globalmount\"" pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.340808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84bnj\" (UniqueName: \"kubernetes.io/projected/09a57c24-4f8a-4799-82b4-1310608086fa-kube-api-access-84bnj\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.373714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.494930 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.967726 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 27 18:19:24 crc kubenswrapper[4907]: I0127 18:19:24.295021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"09a57c24-4f8a-4799-82b4-1310608086fa","Type":"ContainerStarted","Data":"1206abe6204b2eb0954982611d9a9219709aaef25f2f61a5b0f2a0f4aeba9ec1"}
Jan 27 18:19:27 crc kubenswrapper[4907]: I0127 18:19:27.321125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"09a57c24-4f8a-4799-82b4-1310608086fa","Type":"ContainerStarted","Data":"5f444665a79042371164241f9c56bdd93dc691f31aef2fc2d5d77141a9740b7a"}
Jan 27 18:19:27 crc kubenswrapper[4907]: I0127 18:19:27.339386 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.203273228 podStartE2EDuration="7.339366056s" podCreationTimestamp="2026-01-27 18:19:20 +0000 UTC" firstStartedPulling="2026-01-27 18:19:23.984290666 +0000 UTC m=+819.113573278" lastFinishedPulling="2026-01-27 18:19:27.120383474 +0000 UTC m=+822.249666106" observedRunningTime="2026-01-27 18:19:27.337518122 +0000 UTC m=+822.466800734" watchObservedRunningTime="2026-01-27 18:19:27.339366056 +0000 UTC m=+822.468648668"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.031325 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"]
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.032862 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.035624 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-kc9zx"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.037313 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.037783 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.041810 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.046475 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"]
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.051659 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194206 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-config\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194284 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194319 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194346 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk8k\" (UniqueName: \"kubernetes.io/projected/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-kube-api-access-2fk8k\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194384 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.202620 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-r2fdr"]
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.203771 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.206831 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.206856 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.209893 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.219702 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-r2fdr"]
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.295920 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296021 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-config\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296089 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296140 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296178 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk8k\" (UniqueName: \"kubernetes.io/projected/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-kube-api-access-2fk8k\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.298362 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.299317 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-config\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.306923 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.307766 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.319616 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2"]
Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.320473 4907 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.322081 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk8k\" (UniqueName: \"kubernetes.io/projected/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-kube-api-access-2fk8k\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.325509 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.327896 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.351701 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.352164 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406678 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-s3\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406796 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-config\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsnfc\" (UniqueName: \"kubernetes.io/projected/70874c1f-da0d-4389-8021-fd3003150fff-kube-api-access-wsnfc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406848 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406869 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406895 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-config\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406931 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpk69\" (UniqueName: \"kubernetes.io/projected/8f62d8a1-62d1-4206-b061-f75c44ff2450-kube-api-access-tpk69\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406948 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406971 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.407013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.469338 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.472669 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.476122 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.476468 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-5txtb" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480054 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480420 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480546 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480693 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.495133 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.496632 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.505191 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519763 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-config\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519817 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519839 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpk69\" (UniqueName: \"kubernetes.io/projected/8f62d8a1-62d1-4206-b061-f75c44ff2450-kube-api-access-tpk69\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519894 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-s3\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519930 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519938 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.520107 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-config\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: 
I0127 18:19:32.520141 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsnfc\" (UniqueName: \"kubernetes.io/projected/70874c1f-da0d-4389-8021-fd3003150fff-kube-api-access-wsnfc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.520184 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.520219 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.524197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-config\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.527110 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: 
\"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.529229 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.530027 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-config\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.532051 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.532064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.535833 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.536445 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-s3\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.537099 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.560612 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpk69\" (UniqueName: \"kubernetes.io/projected/8f62d8a1-62d1-4206-b061-f75c44ff2450-kube-api-access-tpk69\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.566545 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsnfc\" (UniqueName: \"kubernetes.io/projected/70874c1f-da0d-4389-8021-fd3003150fff-kube-api-access-wsnfc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc 
kubenswrapper[4907]: I0127 18:19:32.621805 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622133 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622264 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622338 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-rbac\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622462 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tenants\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622537 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622595 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-rbac\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.623917 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b427f\" (UniqueName: \"kubernetes.io/projected/d57b015c-f3fc-424d-b910-96e63c6da31a-kube-api-access-b427f\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624003 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: 
I0127 18:19:32.624060 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624191 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tenants\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624254 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624284 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blgg\" (UniqueName: 
\"kubernetes.io/projected/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-kube-api-access-7blgg\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624564 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.692438 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725343 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725397 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725431 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tenants\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725473 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blgg\" (UniqueName: \"kubernetes.io/projected/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-kube-api-access-7blgg\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725509 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725579 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 
18:19:32.725605 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725627 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-rbac\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726500 4907 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726608 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret podName:d57b015c-f3fc-424d-b910-96e63c6da31a nodeName:}" failed. No retries permitted until 2026-01-27 18:19:33.226586376 +0000 UTC m=+828.355868988 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret") pod "logging-loki-gateway-795ff9d55b-mwm5k" (UID: "d57b015c-f3fc-424d-b910-96e63c6da31a") : secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726682 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726810 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726905 4907 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tenants\") pod 
\"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726938 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret podName:faf9da31-9bbb-43b4-9cc1-a80f95392ccf nodeName:}" failed. No retries permitted until 2026-01-27 18:19:33.226930026 +0000 UTC m=+828.356212638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret") pod "logging-loki-gateway-795ff9d55b-njxl9" (UID: "faf9da31-9bbb-43b4-9cc1-a80f95392ccf") : secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726965 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727018 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-rbac\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727072 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b427f\" (UniqueName: \"kubernetes.io/projected/d57b015c-f3fc-424d-b910-96e63c6da31a-kube-api-access-b427f\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " 
pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727705 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-rbac\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727809 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.728481 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.728737 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-rbac\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.737925 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: 
\"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.744202 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tenants\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.745219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tenants\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.745867 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.746515 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.747201 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b427f\" (UniqueName: 
\"kubernetes.io/projected/d57b015c-f3fc-424d-b910-96e63c6da31a-kube-api-access-b427f\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.749373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blgg\" (UniqueName: \"kubernetes.io/projected/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-kube-api-access-7blgg\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.820219 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.938517 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.050338 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.222680 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.224923 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.237919 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.239273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.239371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.248522 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.250728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.253248 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " 
pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.265810 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.317022 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.318731 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.321849 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.323244 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.324114 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341742 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341794 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-config\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341827 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341861 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341903 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vlv\" (UniqueName: \"kubernetes.io/projected/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-kube-api-access-r4vlv\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341923 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: 
\"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.342002 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.374272 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-r2fdr"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.393873 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.394476 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" event={"ID":"70874c1f-da0d-4389-8021-fd3003150fff","Type":"ContainerStarted","Data":"d673e719e31fdd890b64083c125e558045e8d73c0edc9b3ceab510c99da31a04"} Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.401049 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" event={"ID":"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542","Type":"ContainerStarted","Data":"efde6641e3402f3e93ef12c37377589e9791c9f9d73577e58fbe0df9b7e2b504"} Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.403255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" event={"ID":"8f62d8a1-62d1-4206-b061-f75c44ff2450","Type":"ContainerStarted","Data":"c42dde8c073c678dad062c43250215a91efba2a5a88761ae559d49bdfa0cc2c7"} Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.408778 4907 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.410289 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.415200 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.415469 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.427312 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.443499 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.443629 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jvq\" (UniqueName: \"kubernetes.io/projected/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-kube-api-access-v9jvq\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.444938 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445048 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445083 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445132 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-config\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445204 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445229 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445266 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445297 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vlv\" (UniqueName: \"kubernetes.io/projected/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-kube-api-access-r4vlv\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445359 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445390 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445427 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-config\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.447784 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-config\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.452978 4907 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.453184 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ccc690943db56f0ff69e57c33c7a66f3c532dfa1618c3e0713165343c19787a9/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.453957 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.459111 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.461476 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.461544 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b90cd83c2d852d1e2674df5af163b9c3d8362e5073e72af605774f87d64e892f/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.463734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vlv\" (UniqueName: \"kubernetes.io/projected/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-kube-api-access-r4vlv\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.464253 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.464774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.491383 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.502252 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546795 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546862 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546903 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-config\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546937 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546968 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9mr\" (UniqueName: \"kubernetes.io/projected/a9dc6389-0ad3-4259-aaf2-945493e66aa2-kube-api-access-lv9mr\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546993 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547017 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9jvq\" (UniqueName: \"kubernetes.io/projected/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-kube-api-access-v9jvq\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547075 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547099 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547129 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547175 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547207 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: 
\"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547229 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.549046 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.551956 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-config\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.553334 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.553504 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.553543 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99d78d22969a0c3dbf26b0f48c3c2ad420f6ccda5c59f1119c34ac875b1b6e7f/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.554123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.557495 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.564939 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jvq\" (UniqueName: \"kubernetes.io/projected/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-kube-api-access-v9jvq\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.575010 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.597701 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648116 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648206 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648301 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648360 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648398 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv9mr\" (UniqueName: \"kubernetes.io/projected/a9dc6389-0ad3-4259-aaf2-945493e66aa2-kube-api-access-lv9mr\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648417 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.650482 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.650873 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.653836 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.655518 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.655540 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e85ec94bd0636207f226e146b2f799242e2ac55c6d7e547e5ea0a6a5c7b9b6c6/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.656982 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.659063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.672232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv9mr\" (UniqueName: \"kubernetes.io/projected/a9dc6389-0ad3-4259-aaf2-945493e66aa2-kube-api-access-lv9mr\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.685226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.704685 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.807023 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.956284 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.999045 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"] Jan 27 18:19:34 crc kubenswrapper[4907]: W0127 18:19:34.004347 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf9da31_9bbb_43b4_9cc1_a80f95392ccf.slice/crio-caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529 WatchSource:0}: Error finding container caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529: Status 404 returned error can't find the container with id caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529 Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.047734 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k"] Jan 27 18:19:34 crc kubenswrapper[4907]: W0127 18:19:34.051343 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57b015c_f3fc_424d_b910_96e63c6da31a.slice/crio-d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31 WatchSource:0}: Error finding container d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31: Status 404 returned error can't find the container with id d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31 Jan 27 18:19:34 crc kubenswrapper[4907]: W0127 18:19:34.084497 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2448dad5_d0f7_4335_a3fb_a23c5ef59bbf.slice/crio-fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843 
WatchSource:0}: Error finding container fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843: Status 404 returned error can't find the container with id fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843 Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.085892 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.379655 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.415038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" event={"ID":"faf9da31-9bbb-43b4-9cc1-a80f95392ccf","Type":"ContainerStarted","Data":"caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.417056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" event={"ID":"d57b015c-f3fc-424d-b910-96e63c6da31a","Type":"ContainerStarted","Data":"d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.419109 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2","Type":"ContainerStarted","Data":"057cda60221ca368ed15ef39acc10b01864652a5b999174af6ea352ff0dae47b"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.420458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf","Type":"ContainerStarted","Data":"fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.422677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a9dc6389-0ad3-4259-aaf2-945493e66aa2","Type":"ContainerStarted","Data":"feac54ef90d07c99482c20fa97ec9ab8574866da10d6ac8526e21ba4e439bd0d"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.470310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" event={"ID":"8f62d8a1-62d1-4206-b061-f75c44ff2450","Type":"ContainerStarted","Data":"f83c32ae9c0c6afb94fc6ec1f7e91a867355d74476c7c65fcc7cac0b83fecf85"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.472136 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.474161 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" event={"ID":"70874c1f-da0d-4389-8021-fd3003150fff","Type":"ContainerStarted","Data":"c85a941bd03ea248f5abdb09c90c9de2104f44a5f567cd252cadeb077a3e0255"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.474771 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.477702 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" event={"ID":"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542","Type":"ContainerStarted","Data":"3b8c71675ff7a39e19d56b4367412719dcd2aca6c99d5932c4074db781c0db9b"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.478246 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.479816 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" 
event={"ID":"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2","Type":"ContainerStarted","Data":"628096152c42ecc3f5abe46b0bcb43ce9dc5cf31388d2b6e8934e2f3767a0f9b"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.480418 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.481771 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf","Type":"ContainerStarted","Data":"b2aa753307f2618fbffb81bc7e29e603b6cd8f43925033a06ded1fe06c58ff19"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.482327 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.508043 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" podStartSLOduration=2.493857602 podStartE2EDuration="5.50801558s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:33.372158892 +0000 UTC m=+828.501441504" lastFinishedPulling="2026-01-27 18:19:36.38631686 +0000 UTC m=+831.515599482" observedRunningTime="2026-01-27 18:19:37.492664437 +0000 UTC m=+832.621947069" watchObservedRunningTime="2026-01-27 18:19:37.50801558 +0000 UTC m=+832.637298182" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.515118 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a9dc6389-0ad3-4259-aaf2-945493e66aa2","Type":"ContainerStarted","Data":"f339e45730c774c02834a2c8b251a0ea3a4e6ee3f94ebf708e9bde61436a86ea"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.515589 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:37 crc 
kubenswrapper[4907]: I0127 18:19:37.523911 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" podStartSLOduration=1.893670965 podStartE2EDuration="5.523890978s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:33.054623635 +0000 UTC m=+828.183906257" lastFinishedPulling="2026-01-27 18:19:36.684843658 +0000 UTC m=+831.814126270" observedRunningTime="2026-01-27 18:19:37.514337303 +0000 UTC m=+832.643619915" watchObservedRunningTime="2026-01-27 18:19:37.523890978 +0000 UTC m=+832.653173590" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.536230 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.026912889 podStartE2EDuration="5.536202774s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.086732949 +0000 UTC m=+829.216015561" lastFinishedPulling="2026-01-27 18:19:36.596022834 +0000 UTC m=+831.725305446" observedRunningTime="2026-01-27 18:19:37.534086223 +0000 UTC m=+832.663368835" watchObservedRunningTime="2026-01-27 18:19:37.536202774 +0000 UTC m=+832.665485386" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.561187 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.129414238 podStartE2EDuration="5.561148994s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.00258238 +0000 UTC m=+829.131864992" lastFinishedPulling="2026-01-27 18:19:36.434317106 +0000 UTC m=+831.563599748" observedRunningTime="2026-01-27 18:19:37.55650057 +0000 UTC m=+832.685783192" watchObservedRunningTime="2026-01-27 18:19:37.561148994 +0000 UTC m=+832.690431606" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.591297 4907 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" podStartSLOduration=2.17683583 podStartE2EDuration="5.591262503s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:32.950602223 +0000 UTC m=+828.079884835" lastFinishedPulling="2026-01-27 18:19:36.365028896 +0000 UTC m=+831.494311508" observedRunningTime="2026-01-27 18:19:37.578253288 +0000 UTC m=+832.707535910" watchObservedRunningTime="2026-01-27 18:19:37.591262503 +0000 UTC m=+832.720545125" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.603311 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.3130834399999998 podStartE2EDuration="5.60328941s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.388221592 +0000 UTC m=+829.517504204" lastFinishedPulling="2026-01-27 18:19:36.678427562 +0000 UTC m=+831.807710174" observedRunningTime="2026-01-27 18:19:37.600992164 +0000 UTC m=+832.730274776" watchObservedRunningTime="2026-01-27 18:19:37.60328941 +0000 UTC m=+832.732572022" Jan 27 18:19:38 crc kubenswrapper[4907]: I0127 18:19:38.525406 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" event={"ID":"faf9da31-9bbb-43b4-9cc1-a80f95392ccf","Type":"ContainerStarted","Data":"9fb9edcfa7e6be0446bac646702dd512220d81b7c1403076c08933a25eb3ac09"} Jan 27 18:19:38 crc kubenswrapper[4907]: I0127 18:19:38.527849 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" event={"ID":"d57b015c-f3fc-424d-b910-96e63c6da31a","Type":"ContainerStarted","Data":"a85acb993e555d7a09c2f5f4c314467ba64332a42535476888d3ab36570ae963"} Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.546808 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" event={"ID":"d57b015c-f3fc-424d-b910-96e63c6da31a","Type":"ContainerStarted","Data":"ba77dc4cbb0aba628360200d4ea5f2cec320a2ff6c85474ef9b6f3f64b7fc4ca"} Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.547991 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.548078 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.549668 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": dial tcp 10.217.0.50:8083: connect: connection refused" start-of-body= Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.549759 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": dial tcp 10.217.0.50:8083: connect: connection refused" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.572331 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.591394 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podStartSLOduration=2.383150046 podStartE2EDuration="8.591371256s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.056754964 +0000 UTC m=+829.186037576" lastFinishedPulling="2026-01-27 18:19:40.264976174 +0000 UTC m=+835.394258786" 
observedRunningTime="2026-01-27 18:19:40.583217291 +0000 UTC m=+835.712499903" watchObservedRunningTime="2026-01-27 18:19:40.591371256 +0000 UTC m=+835.720653888"
Jan 27 18:19:41 crc kubenswrapper[4907]: I0127 18:19:41.576283 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k"
Jan 27 18:19:52 crc kubenswrapper[4907]: I0127 18:19:52.366823 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"
Jan 27 18:19:52 crc kubenswrapper[4907]: I0127 18:19:52.703507 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2"
Jan 27 18:19:52 crc kubenswrapper[4907]: I0127 18:19:52.829958 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr"
Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.583769 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.583862 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.716057 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.823787 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 18:20:03 crc kubenswrapper[4907]: I0127 18:20:03.580306 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 27 18:20:03 crc kubenswrapper[4907]: I0127 18:20:03.581213 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.833640 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" event={"ID":"faf9da31-9bbb-43b4-9cc1-a80f95392ccf","Type":"ContainerStarted","Data":"b2a5a65f14b1bc2e166f1c286f3c1bfb74a3d9022a08bbd596d0fdb92264a1ff"}
Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.834551 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"
Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.834593 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"
Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.849310 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"
Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.854651 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"
Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.867469 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podStartSLOduration=2.755317419 podStartE2EDuration="38.867443987s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.007870613 +0000 UTC m=+829.137153225" lastFinishedPulling="2026-01-27 18:20:10.119997141 +0000 UTC m=+865.249279793" observedRunningTime="2026-01-27 18:20:10.860097595 +0000 UTC m=+865.989380207" watchObservedRunningTime="2026-01-27 18:20:10.867443987 +0000 UTC m=+865.996726609"
Jan 27 18:20:13 crc kubenswrapper[4907]: I0127 18:20:13.581449 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 27 18:20:13 crc kubenswrapper[4907]: I0127 18:20:13.582024 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.394576 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptc2v"]
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.397311 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.409258 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"]
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.548403 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.548502 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.548549 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.650933 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651058 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651549 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.674463 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.726434 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.326826 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"]
Jan 27 18:20:23 crc kubenswrapper[4907]: W0127 18:20:23.332866 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d4cec2_3672_4d27_aac3_0e29e9f913aa.slice/crio-23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10 WatchSource:0}: Error finding container 23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10: Status 404 returned error can't find the container with id 23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10
Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.580012 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.580578 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.971399 4907 generic.go:334] "Generic (PLEG): container finished" podID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24" exitCode=0
Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.971471 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24"}
Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.971504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerStarted","Data":"23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10"}
Jan 27 18:20:24 crc kubenswrapper[4907]: I0127 18:20:24.980587 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerStarted","Data":"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"}
Jan 27 18:20:25 crc kubenswrapper[4907]: I0127 18:20:25.992333 4907 generic.go:334] "Generic (PLEG): container finished" podID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478" exitCode=0
Jan 27 18:20:25 crc kubenswrapper[4907]: I0127 18:20:25.992419 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"}
Jan 27 18:20:25 crc kubenswrapper[4907]: I0127 18:20:25.992869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerStarted","Data":"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"}
Jan 27 18:20:26 crc kubenswrapper[4907]: I0127 18:20:26.021693 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptc2v" podStartSLOduration=2.605695844 podStartE2EDuration="4.021654558s" podCreationTimestamp="2026-01-27 18:20:22 +0000 UTC" firstStartedPulling="2026-01-27 18:20:23.974226736 +0000 UTC m=+879.103509348" lastFinishedPulling="2026-01-27 18:20:25.39018545 +0000 UTC m=+880.519468062" observedRunningTime="2026-01-27 18:20:26.015316435 +0000 UTC m=+881.144599047" watchObservedRunningTime="2026-01-27 18:20:26.021654558 +0000 UTC m=+881.150937190"
Jan 27 18:20:32 crc kubenswrapper[4907]: I0127 18:20:32.727411 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:32 crc kubenswrapper[4907]: I0127 18:20:32.729749 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:32 crc kubenswrapper[4907]: I0127 18:20:32.799920 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:33 crc kubenswrapper[4907]: I0127 18:20:33.114793 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:33 crc kubenswrapper[4907]: I0127 18:20:33.171690 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"]
Jan 27 18:20:33 crc kubenswrapper[4907]: I0127 18:20:33.583193 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.083114 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptc2v" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server" containerID="cri-o://d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c" gracePeriod=2
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.724771 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.824780 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") "
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.824883 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") "
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.825037 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") "
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.825916 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities" (OuterVolumeSpecName: "utilities") pod "e6d4cec2-3672-4d27-aac3-0e29e9f913aa" (UID: "e6d4cec2-3672-4d27-aac3-0e29e9f913aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.839506 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf" (OuterVolumeSpecName: "kube-api-access-r26tf") pod "e6d4cec2-3672-4d27-aac3-0e29e9f913aa" (UID: "e6d4cec2-3672-4d27-aac3-0e29e9f913aa"). InnerVolumeSpecName "kube-api-access-r26tf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.927253 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.927303 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.031602 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6d4cec2-3672-4d27-aac3-0e29e9f913aa" (UID: "e6d4cec2-3672-4d27-aac3-0e29e9f913aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093246 4907 generic.go:334] "Generic (PLEG): container finished" podID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c" exitCode=0
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093306 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"}
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093340 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10"}
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093363 4907 scope.go:117] "RemoveContainer" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093501 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.130617 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.130982 4907 scope.go:117] "RemoveContainer" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.134073 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"]
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.142126 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"]
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.151414 4907 scope.go:117] "RemoveContainer" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.191589 4907 scope.go:117] "RemoveContainer" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"
Jan 27 18:20:36 crc kubenswrapper[4907]: E0127 18:20:36.191970 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c\": container with ID starting with d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c not found: ID does not exist" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.192009 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"} err="failed to get container status \"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c\": rpc error: code = NotFound desc = could not find container \"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c\": container with ID starting with d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c not found: ID does not exist"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.192036 4907 scope.go:117] "RemoveContainer" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"
Jan 27 18:20:36 crc kubenswrapper[4907]: E0127 18:20:36.192655 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478\": container with ID starting with 060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478 not found: ID does not exist" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.192690 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"} err="failed to get container status \"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478\": rpc error: code = NotFound desc = could not find container \"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478\": container with ID starting with 060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478 not found: ID does not exist"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.192703 4907 scope.go:117] "RemoveContainer" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24"
Jan 27 18:20:36 crc kubenswrapper[4907]: E0127 18:20:36.193070 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24\": container with ID starting with 310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24 not found: ID does not exist" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24"
Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.193096 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24"} err="failed to get container status \"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24\": rpc error: code = NotFound desc = could not find container \"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24\": container with ID starting with 310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24 not found: ID does not exist"
Jan 27 18:20:37 crc kubenswrapper[4907]: I0127 18:20:37.762791 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" path="/var/lib/kubelet/pods/e6d4cec2-3672-4d27-aac3-0e29e9f913aa/volumes"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.263255 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-q5b5q"]
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.264398 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-content"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264483 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-content"
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.264500 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264509 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server"
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.264547 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-utilities"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264637 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-utilities"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264832 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.265664 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.268851 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.272677 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.272997 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-npdhl"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.273154 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.273462 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.282352 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.297361 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-q5b5q"]
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307467 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307520 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307584 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307607 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307632 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307734 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307754 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307799 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307838 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.342628 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-q5b5q"]
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.343266 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-zn9h8 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-q5b5q" podUID="29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409283 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409341 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409433 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409454 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409483 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409506 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409537 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409628 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.410313 4907 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.410430 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver podName:29bf2bd0-d34f-4ca1-b84f-bb7a003039f4 nodeName:}" failed. No retries permitted until 2026-01-27 18:20:52.910404628 +0000 UTC m=+908.039687240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver") pod "collector-q5b5q" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4") : secret "collector-syslog-receiver" not found
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.410549 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.410341 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.411491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.411580 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.427689 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.432409 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.438350 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.438734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.439336 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.918412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" 
(UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.921626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.246524 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.265138 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427467 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427604 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc 
kubenswrapper[4907]: I0127 18:20:53.427703 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427751 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427753 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir" (OuterVolumeSpecName: "datadir") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427833 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427892 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428295 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config" (OuterVolumeSpecName: "config") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428650 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428710 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428716 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428766 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428809 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428843 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428805 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429489 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429521 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429537 4907 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429572 4907 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429590 4907 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.431911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). 
InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.432212 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics" (OuterVolumeSpecName: "metrics") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.432684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8" (OuterVolumeSpecName: "kube-api-access-zn9h8") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "kube-api-access-zn9h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.433054 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token" (OuterVolumeSpecName: "sa-token") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.433286 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token" (OuterVolumeSpecName: "collector-token") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.434050 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp" (OuterVolumeSpecName: "tmp") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531113 4907 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531510 4907 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531626 4907 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531715 4907 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531805 4907 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531886 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn9h8\" (UniqueName: 
\"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.256261 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.300013 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-q5b5q"] Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.315116 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-q5b5q"] Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.321465 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-2bmhz"] Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.322682 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.325391 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.325813 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-npdhl" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.326302 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.326497 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.332642 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.337645 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 
27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.355449 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-2bmhz"] Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.449901 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-entrypoint\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.449957 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-metrics\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.449977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-trusted-ca\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450039 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e66fb20d-fb54-4964-9fb8-0ca14b94f895-datadir\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450107 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-syslog-receiver\") pod \"collector-2bmhz\" (UID: 
\"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450187 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wdj\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-kube-api-access-k4wdj\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450246 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450304 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config-openshift-service-cacrt\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450385 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-sa-token\") pod \"collector-2bmhz\" (UID: 
\"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e66fb20d-fb54-4964-9fb8-0ca14b94f895-tmp\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552417 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552512 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config-openshift-service-cacrt\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552621 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-sa-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 
18:20:54.552666 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e66fb20d-fb54-4964-9fb8-0ca14b94f895-tmp\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.553386 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-entrypoint\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.553831 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config-openshift-service-cacrt\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554008 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-entrypoint\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554612 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-metrics\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-trusted-ca\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e66fb20d-fb54-4964-9fb8-0ca14b94f895-datadir\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554812 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e66fb20d-fb54-4964-9fb8-0ca14b94f895-datadir\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556105 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e66fb20d-fb54-4964-9fb8-0ca14b94f895-tmp\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556129 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-syslog-receiver\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 
18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wdj\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-kube-api-access-k4wdj\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556546 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.558182 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-metrics\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.569242 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-syslog-receiver\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.574766 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-sa-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.580842 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-trusted-ca\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.580899 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wdj\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-kube-api-access-k4wdj\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz" Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.659123 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-2bmhz" Jan 27 18:20:55 crc kubenswrapper[4907]: I0127 18:20:55.128656 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-2bmhz"] Jan 27 18:20:55 crc kubenswrapper[4907]: I0127 18:20:55.268945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-2bmhz" event={"ID":"e66fb20d-fb54-4964-9fb8-0ca14b94f895","Type":"ContainerStarted","Data":"083145679fa1ab5e2ff32c39e686a219dfbbcf851f08a4ce0ab950503451880b"} Jan 27 18:20:55 crc kubenswrapper[4907]: I0127 18:20:55.763150 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" path="/var/lib/kubelet/pods/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4/volumes" Jan 27 18:20:56 crc kubenswrapper[4907]: I0127 18:20:56.521766 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:20:56 crc kubenswrapper[4907]: I0127 18:20:56.522103 4907 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:02 crc kubenswrapper[4907]: I0127 18:21:02.335463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-2bmhz" event={"ID":"e66fb20d-fb54-4964-9fb8-0ca14b94f895","Type":"ContainerStarted","Data":"5743ba0e03537f31e433c3a8183e7e1b98e4307f1bd6dc7dee8a350f54b02297"} Jan 27 18:21:02 crc kubenswrapper[4907]: I0127 18:21:02.370107 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-2bmhz" podStartSLOduration=1.8599209349999999 podStartE2EDuration="8.370082826s" podCreationTimestamp="2026-01-27 18:20:54 +0000 UTC" firstStartedPulling="2026-01-27 18:20:55.114255302 +0000 UTC m=+910.243537914" lastFinishedPulling="2026-01-27 18:21:01.624417193 +0000 UTC m=+916.753699805" observedRunningTime="2026-01-27 18:21:02.368521981 +0000 UTC m=+917.497804603" watchObservedRunningTime="2026-01-27 18:21:02.370082826 +0000 UTC m=+917.499365448" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.201401 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.215231 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.220357 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.413266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.413935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.414092 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.515693 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.515937 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.515997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.516594 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.516891 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.541853 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.550929 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.047541 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.561119 4907 generic.go:334] "Generic (PLEG): container finished" podID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" exitCode=0 Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.561252 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d"} Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.561341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerStarted","Data":"698b12fed85423f3f2113a134a20e1d4960384ade0e7258431251b0d5a827251"} Jan 27 18:21:25 crc kubenswrapper[4907]: I0127 18:21:25.572570 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerStarted","Data":"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666"} Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.521740 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.522130 4907 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.583380 4907 generic.go:334] "Generic (PLEG): container finished" podID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" exitCode=0 Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.583447 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666"} Jan 27 18:21:27 crc kubenswrapper[4907]: I0127 18:21:27.592913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerStarted","Data":"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264"} Jan 27 18:21:27 crc kubenswrapper[4907]: I0127 18:21:27.618121 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hzcvt" podStartSLOduration=2.190707799 podStartE2EDuration="4.618101996s" podCreationTimestamp="2026-01-27 18:21:23 +0000 UTC" firstStartedPulling="2026-01-27 18:21:24.564162878 +0000 UTC m=+939.693445490" lastFinishedPulling="2026-01-27 18:21:26.991557035 +0000 UTC m=+942.120839687" observedRunningTime="2026-01-27 18:21:27.611677051 +0000 UTC m=+942.740959663" watchObservedRunningTime="2026-01-27 18:21:27.618101996 +0000 UTC m=+942.747384598" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.551715 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 
18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.552627 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.621057 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.718663 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.873910 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.332681 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq"] Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.334404 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.336551 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.347715 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq"] Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.437356 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.437576 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.437614 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: 
I0127 18:21:35.539475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.539528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.539604 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.540134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.540185 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.563565 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.652580 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.657642 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hzcvt" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" containerID="cri-o://cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" gracePeriod=2 Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.132304 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.257672 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"da2a8ac5-42c7-4326-aef8-b7f713af971d\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.257738 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"da2a8ac5-42c7-4326-aef8-b7f713af971d\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.257774 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"da2a8ac5-42c7-4326-aef8-b7f713af971d\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.259001 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities" (OuterVolumeSpecName: "utilities") pod "da2a8ac5-42c7-4326-aef8-b7f713af971d" (UID: "da2a8ac5-42c7-4326-aef8-b7f713af971d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.278886 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8" (OuterVolumeSpecName: "kube-api-access-mmmp8") pod "da2a8ac5-42c7-4326-aef8-b7f713af971d" (UID: "da2a8ac5-42c7-4326-aef8-b7f713af971d"). InnerVolumeSpecName "kube-api-access-mmmp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.306808 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da2a8ac5-42c7-4326-aef8-b7f713af971d" (UID: "da2a8ac5-42c7-4326-aef8-b7f713af971d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.314311 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq"] Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.359453 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.359488 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.359499 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.666773 4907 generic.go:334] "Generic (PLEG): container finished" podID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" exitCode=0 Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.666844 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" 
event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264"} Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.666888 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.667206 4907 scope.go:117] "RemoveContainer" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.667184 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"698b12fed85423f3f2113a134a20e1d4960384ade0e7258431251b0d5a827251"} Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.668631 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerStarted","Data":"2dced305c89728ae510d177ef3f9e6689cbe765e0aafd7ccd430786723390672"} Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.687115 4907 scope.go:117] "RemoveContainer" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.722598 4907 scope.go:117] "RemoveContainer" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.736712 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.748819 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.752302 4907 scope.go:117] 
"RemoveContainer" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" Jan 27 18:21:36 crc kubenswrapper[4907]: E0127 18:21:36.752957 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264\": container with ID starting with cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264 not found: ID does not exist" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.753154 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264"} err="failed to get container status \"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264\": rpc error: code = NotFound desc = could not find container \"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264\": container with ID starting with cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264 not found: ID does not exist" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.753259 4907 scope.go:117] "RemoveContainer" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" Jan 27 18:21:36 crc kubenswrapper[4907]: E0127 18:21:36.754051 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666\": container with ID starting with 5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666 not found: ID does not exist" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.754103 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666"} err="failed to get container status \"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666\": rpc error: code = NotFound desc = could not find container \"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666\": container with ID starting with 5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666 not found: ID does not exist" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.754132 4907 scope.go:117] "RemoveContainer" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" Jan 27 18:21:36 crc kubenswrapper[4907]: E0127 18:21:36.754377 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d\": container with ID starting with a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d not found: ID does not exist" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.754405 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d"} err="failed to get container status \"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d\": rpc error: code = NotFound desc = could not find container \"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d\": container with ID starting with a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d not found: ID does not exist" Jan 27 18:21:37 crc kubenswrapper[4907]: I0127 18:21:37.678296 4907 generic.go:334] "Generic (PLEG): container finished" podID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerID="a304ea0c599d1f5ce44ba39cc0f5f4d71173bd08485fbd83f5766b2c9d27a3f0" exitCode=0 Jan 27 18:21:37 crc kubenswrapper[4907]: 
I0127 18:21:37.678344 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"a304ea0c599d1f5ce44ba39cc0f5f4d71173bd08485fbd83f5766b2c9d27a3f0"} Jan 27 18:21:37 crc kubenswrapper[4907]: I0127 18:21:37.758341 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" path="/var/lib/kubelet/pods/da2a8ac5-42c7-4326-aef8-b7f713af971d/volumes" Jan 27 18:21:39 crc kubenswrapper[4907]: I0127 18:21:39.697375 4907 generic.go:334] "Generic (PLEG): container finished" podID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerID="762fd705ef8f5be28eb2f612171e0e1d8edf46304f898d2651b11fccee0a5d69" exitCode=0 Jan 27 18:21:39 crc kubenswrapper[4907]: I0127 18:21:39.697489 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"762fd705ef8f5be28eb2f612171e0e1d8edf46304f898d2651b11fccee0a5d69"} Jan 27 18:21:40 crc kubenswrapper[4907]: I0127 18:21:40.710160 4907 generic.go:334] "Generic (PLEG): container finished" podID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerID="604979c1cc096c0e7084885f113274d201082a4fdf5604b1a1b11c4589c56282" exitCode=0 Jan 27 18:21:40 crc kubenswrapper[4907]: I0127 18:21:40.710256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"604979c1cc096c0e7084885f113274d201082a4fdf5604b1a1b11c4589c56282"} Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.076787 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.175972 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.176035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.176144 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.177183 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle" (OuterVolumeSpecName: "bundle") pod "9d2f9525-f0c4-4585-8162-0bce8fb139e9" (UID: "9d2f9525-f0c4-4585-8162-0bce8fb139e9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.189910 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j" (OuterVolumeSpecName: "kube-api-access-mpd8j") pod "9d2f9525-f0c4-4585-8162-0bce8fb139e9" (UID: "9d2f9525-f0c4-4585-8162-0bce8fb139e9"). InnerVolumeSpecName "kube-api-access-mpd8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.210169 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util" (OuterVolumeSpecName: "util") pod "9d2f9525-f0c4-4585-8162-0bce8fb139e9" (UID: "9d2f9525-f0c4-4585-8162-0bce8fb139e9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.278761 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.278841 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.278870 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.730593 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"2dced305c89728ae510d177ef3f9e6689cbe765e0aafd7ccd430786723390672"} Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.730639 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dced305c89728ae510d177ef3f9e6689cbe765e0aafd7ccd430786723390672" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.730716 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.375393 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j277h"] Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376028 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="util" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376045 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="util" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376073 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-utilities" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376082 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-utilities" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376092 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376101 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376113 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="extract" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376121 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="extract" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376138 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-content" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376145 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-content" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376167 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="pull" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376175 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="pull" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376349 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="extract" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376370 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.377018 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.381741 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-trkrl" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.381741 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.381802 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.397013 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j277h"] Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.538471 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jzlg\" (UniqueName: \"kubernetes.io/projected/a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b-kube-api-access-7jzlg\") pod \"nmstate-operator-646758c888-j277h\" (UID: \"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b\") " pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.640421 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jzlg\" (UniqueName: \"kubernetes.io/projected/a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b-kube-api-access-7jzlg\") pod \"nmstate-operator-646758c888-j277h\" (UID: \"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b\") " pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.662066 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.673097 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.690080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jzlg\" (UniqueName: \"kubernetes.io/projected/a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b-kube-api-access-7jzlg\") pod \"nmstate-operator-646758c888-j277h\" (UID: \"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b\") " pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.701422 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-trkrl" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.710550 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:46 crc kubenswrapper[4907]: I0127 18:21:46.204068 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j277h"] Jan 27 18:21:46 crc kubenswrapper[4907]: I0127 18:21:46.771900 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" event={"ID":"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b","Type":"ContainerStarted","Data":"218fc0101b700e17b626492d02594460f79d43d669615d94942a61dedd455251"} Jan 27 18:21:49 crc kubenswrapper[4907]: I0127 18:21:49.794932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" event={"ID":"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b","Type":"ContainerStarted","Data":"9a87c7f81bc2e2e6d5c8e8e41952cf0cf2616476d46cb2531812a9745472eb2f"} Jan 27 18:21:49 crc kubenswrapper[4907]: I0127 18:21:49.817978 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" podStartSLOduration=2.287954051 podStartE2EDuration="4.817955133s" podCreationTimestamp="2026-01-27 18:21:45 +0000 UTC" 
firstStartedPulling="2026-01-27 18:21:46.220585595 +0000 UTC m=+961.349868257" lastFinishedPulling="2026-01-27 18:21:48.750586727 +0000 UTC m=+963.879869339" observedRunningTime="2026-01-27 18:21:49.813242037 +0000 UTC m=+964.942524639" watchObservedRunningTime="2026-01-27 18:21:49.817955133 +0000 UTC m=+964.947237745" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.437988 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-f7vbh"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.440228 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.443887 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vdvgh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.454793 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.455906 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.490681 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.509818 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-f7vbh"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.521654 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.521721 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.521775 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.522466 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.522530 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c" gracePeriod=600 Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.530631 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wz5df"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.532086 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.537535 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.592938 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53f2859-15de-4c57-81ba-539c7787b649-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.593437 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szrm\" (UniqueName: \"kubernetes.io/projected/eeb93cd2-3631-4fad-a0d1-01232bbf9202-kube-api-access-2szrm\") pod \"nmstate-metrics-54757c584b-f7vbh\" (UID: \"eeb93cd2-3631-4fad-a0d1-01232bbf9202\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.593483 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brz8\" (UniqueName: \"kubernetes.io/projected/c53f2859-15de-4c57-81ba-539c7787b649-kube-api-access-5brz8\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: 
\"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.626787 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.627933 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.635976 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.636348 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dtwbg" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.636521 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.676455 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698740 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53f2859-15de-4c57-81ba-539c7787b649-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698814 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szrm\" (UniqueName: \"kubernetes.io/projected/eeb93cd2-3631-4fad-a0d1-01232bbf9202-kube-api-access-2szrm\") pod \"nmstate-metrics-54757c584b-f7vbh\" (UID: \"eeb93cd2-3631-4fad-a0d1-01232bbf9202\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" 
Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brz8\" (UniqueName: \"kubernetes.io/projected/c53f2859-15de-4c57-81ba-539c7787b649-kube-api-access-5brz8\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698897 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-dbus-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698931 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-ovs-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.699026 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njpj\" (UniqueName: \"kubernetes.io/projected/0b5adf10-ea9c-48b5-bece-3ee8683423e3-kube-api-access-4njpj\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.699132 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-nmstate-lock\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " 
pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.723358 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szrm\" (UniqueName: \"kubernetes.io/projected/eeb93cd2-3631-4fad-a0d1-01232bbf9202-kube-api-access-2szrm\") pod \"nmstate-metrics-54757c584b-f7vbh\" (UID: \"eeb93cd2-3631-4fad-a0d1-01232bbf9202\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.738431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brz8\" (UniqueName: \"kubernetes.io/projected/c53f2859-15de-4c57-81ba-539c7787b649-kube-api-access-5brz8\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.738809 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53f2859-15de-4c57-81ba-539c7787b649-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800602 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njpj\" (UniqueName: \"kubernetes.io/projected/0b5adf10-ea9c-48b5-bece-3ee8683423e3-kube-api-access-4njpj\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800900 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d3336bb0-ef0d-47f3-b3c7-de266154f20e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: 
\"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800945 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-nmstate-lock\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rh2q\" (UniqueName: \"kubernetes.io/projected/d3336bb0-ef0d-47f3-b3c7-de266154f20e-kube-api-access-8rh2q\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801007 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3336bb0-ef0d-47f3-b3c7-de266154f20e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801038 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-dbus-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801060 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-ovs-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801152 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-ovs-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801195 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-nmstate-lock\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801550 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-dbus-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.816037 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.827947 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.869629 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njpj\" (UniqueName: \"kubernetes.io/projected/0b5adf10-ea9c-48b5-bece-3ee8683423e3-kube-api-access-4njpj\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.870214 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.890852 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c" exitCode=0 Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.890915 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c"} Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.890956 4907 scope.go:117] "RemoveContainer" containerID="6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.907176 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rh2q\" (UniqueName: \"kubernetes.io/projected/d3336bb0-ef0d-47f3-b3c7-de266154f20e-kube-api-access-8rh2q\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.907255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3336bb0-ef0d-47f3-b3c7-de266154f20e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.907395 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d3336bb0-ef0d-47f3-b3c7-de266154f20e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.908522 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d3336bb0-ef0d-47f3-b3c7-de266154f20e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.916807 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.917803 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.922401 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3336bb0-ef0d-47f3-b3c7-de266154f20e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.954612 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rh2q\" (UniqueName: \"kubernetes.io/projected/d3336bb0-ef0d-47f3-b3c7-de266154f20e-kube-api-access-8rh2q\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.974539 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.984656 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113202 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113522 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113568 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113594 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113631 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113653 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113940 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216430 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216487 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216532 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216569 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216599 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216616 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.218113 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.219348 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.220109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.221411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.222352 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.222447 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.240362 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.263287 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.466157 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-f7vbh"] Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.517302 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2"] Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.524084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w"] Jan 27 18:21:57 crc kubenswrapper[4907]: W0127 18:21:57.526383 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3336bb0_ef0d_47f3_b3c7_de266154f20e.slice/crio-374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6 WatchSource:0}: Error finding container 374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6: Status 404 returned error can't find the container with id 374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6 Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.729113 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.903914 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerStarted","Data":"d8e779a538fd171e62688ea894409db54be158536da7dede33422f76801c0085"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.905774 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" event={"ID":"c53f2859-15de-4c57-81ba-539c7787b649","Type":"ContainerStarted","Data":"9c4f090d3d1772eacb63bc8cc2c4d88b15585015e82ffaded6691f0b0cab40a8"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.907196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" event={"ID":"d3336bb0-ef0d-47f3-b3c7-de266154f20e","Type":"ContainerStarted","Data":"374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.909859 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.911417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" event={"ID":"eeb93cd2-3631-4fad-a0d1-01232bbf9202","Type":"ContainerStarted","Data":"c51966cbf8d7df3679571580c8c1541d61f7f67978c0f9093aa8098e39e0f850"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.913855 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wz5df" 
event={"ID":"0b5adf10-ea9c-48b5-bece-3ee8683423e3","Type":"ContainerStarted","Data":"1bea98502944fad4c9e28c5ce050a438e4ff06eb2b47f57f5f0a3b24e88df233"} Jan 27 18:21:58 crc kubenswrapper[4907]: I0127 18:21:58.944647 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerStarted","Data":"73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45"} Jan 27 18:21:58 crc kubenswrapper[4907]: I0127 18:21:58.979828 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65dccccccb-km74l" podStartSLOduration=2.979806679 podStartE2EDuration="2.979806679s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:21:58.974799985 +0000 UTC m=+974.104082597" watchObservedRunningTime="2026-01-27 18:21:58.979806679 +0000 UTC m=+974.109089291" Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.958790 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" event={"ID":"eeb93cd2-3631-4fad-a0d1-01232bbf9202","Type":"ContainerStarted","Data":"c3500d0cfc2ae6512c39749f3a8b9a88ce3305913cb344b157a8fa7612d61968"} Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.961580 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" event={"ID":"c53f2859-15de-4c57-81ba-539c7787b649","Type":"ContainerStarted","Data":"8474efa2db5d68150ca85bb9f44c99eca252b89372397e5d03f96f2817940286"} Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.961728 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.963050 4907 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" event={"ID":"d3336bb0-ef0d-47f3-b3c7-de266154f20e","Type":"ContainerStarted","Data":"3b008bfec8c68ce8c0345a2757db7fc91355b901a4743fc1bc7e1f35303d6af4"} Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.988919 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" podStartSLOduration=1.919284732 podStartE2EDuration="4.988895343s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:57.513865559 +0000 UTC m=+972.643148171" lastFinishedPulling="2026-01-27 18:22:00.58347617 +0000 UTC m=+975.712758782" observedRunningTime="2026-01-27 18:22:00.984929748 +0000 UTC m=+976.114212350" watchObservedRunningTime="2026-01-27 18:22:00.988895343 +0000 UTC m=+976.118177955" Jan 27 18:22:01 crc kubenswrapper[4907]: I0127 18:22:01.003651 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" podStartSLOduration=1.950305028 podStartE2EDuration="5.003630449s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:57.528026198 +0000 UTC m=+972.657308810" lastFinishedPulling="2026-01-27 18:22:00.581351579 +0000 UTC m=+975.710634231" observedRunningTime="2026-01-27 18:22:01.002732053 +0000 UTC m=+976.132014675" watchObservedRunningTime="2026-01-27 18:22:01.003630449 +0000 UTC m=+976.132913061" Jan 27 18:22:01 crc kubenswrapper[4907]: I0127 18:22:01.974123 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wz5df" event={"ID":"0b5adf10-ea9c-48b5-bece-3ee8683423e3","Type":"ContainerStarted","Data":"28f3efc7fa28430e988f663f6384aeab8a491fa85cf4be6af123fc71a6e82338"} Jan 27 18:22:01 crc kubenswrapper[4907]: I0127 18:22:01.993311 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wz5df" 
podStartSLOduration=2.369403916 podStartE2EDuration="5.99328676s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:56.982999512 +0000 UTC m=+972.112282114" lastFinishedPulling="2026-01-27 18:22:00.606882336 +0000 UTC m=+975.736164958" observedRunningTime="2026-01-27 18:22:01.987252936 +0000 UTC m=+977.116535548" watchObservedRunningTime="2026-01-27 18:22:01.99328676 +0000 UTC m=+977.122569412" Jan 27 18:22:02 crc kubenswrapper[4907]: I0127 18:22:02.985796 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:22:03 crc kubenswrapper[4907]: I0127 18:22:03.993001 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" event={"ID":"eeb93cd2-3631-4fad-a0d1-01232bbf9202","Type":"ContainerStarted","Data":"4bc09389b686797c47fe71efee2c090f86632624510af7f22d52c1d4d4e555cf"} Jan 27 18:22:04 crc kubenswrapper[4907]: I0127 18:22:04.021917 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" podStartSLOduration=1.877805853 podStartE2EDuration="8.021896226s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:57.454585446 +0000 UTC m=+972.583868058" lastFinishedPulling="2026-01-27 18:22:03.598675819 +0000 UTC m=+978.727958431" observedRunningTime="2026-01-27 18:22:04.01544283 +0000 UTC m=+979.144725442" watchObservedRunningTime="2026-01-27 18:22:04.021896226 +0000 UTC m=+979.151178848" Jan 27 18:22:06 crc kubenswrapper[4907]: I0127 18:22:06.905339 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.263780 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 
18:22:07.264000 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.270286 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.718112 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.720647 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.738829 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.827535 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.827674 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.827813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod 
\"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.929328 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.930327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.930643 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.930985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.931063 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"certified-operators-wntx4\" (UID: 
\"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.954600 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.027890 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.079468 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.157695 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"] Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.724834 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:08 crc kubenswrapper[4907]: W0127 18:22:08.727634 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe6c326_a67b_4381_bdfa_8716d5caf5c8.slice/crio-9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a WatchSource:0}: Error finding container 9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a: Status 404 returned error can't find the container with id 9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a Jan 27 18:22:09 crc kubenswrapper[4907]: I0127 18:22:09.028168 4907 generic.go:334] "Generic (PLEG): container finished" podID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" 
containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" exitCode=0 Jan 27 18:22:09 crc kubenswrapper[4907]: I0127 18:22:09.028227 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf"} Jan 27 18:22:09 crc kubenswrapper[4907]: I0127 18:22:09.028276 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerStarted","Data":"9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a"} Jan 27 18:22:10 crc kubenswrapper[4907]: I0127 18:22:10.040949 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerStarted","Data":"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e"} Jan 27 18:22:11 crc kubenswrapper[4907]: I0127 18:22:11.055052 4907 generic.go:334] "Generic (PLEG): container finished" podID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" exitCode=0 Jan 27 18:22:11 crc kubenswrapper[4907]: I0127 18:22:11.055176 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e"} Jan 27 18:22:11 crc kubenswrapper[4907]: I0127 18:22:11.058093 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:22:12 crc kubenswrapper[4907]: I0127 18:22:12.068112 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" 
event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerStarted","Data":"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97"} Jan 27 18:22:12 crc kubenswrapper[4907]: I0127 18:22:12.093014 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wntx4" podStartSLOduration=2.668965401 podStartE2EDuration="5.092992722s" podCreationTimestamp="2026-01-27 18:22:07 +0000 UTC" firstStartedPulling="2026-01-27 18:22:09.029891618 +0000 UTC m=+984.159174230" lastFinishedPulling="2026-01-27 18:22:11.453918929 +0000 UTC m=+986.583201551" observedRunningTime="2026-01-27 18:22:12.090743047 +0000 UTC m=+987.220025699" watchObservedRunningTime="2026-01-27 18:22:12.092992722 +0000 UTC m=+987.222275344" Jan 27 18:22:16 crc kubenswrapper[4907]: I0127 18:22:16.833078 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:22:18 crc kubenswrapper[4907]: I0127 18:22:18.079906 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:18 crc kubenswrapper[4907]: I0127 18:22:18.081818 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:18 crc kubenswrapper[4907]: I0127 18:22:18.155066 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:19 crc kubenswrapper[4907]: I0127 18:22:19.165476 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:19 crc kubenswrapper[4907]: I0127 18:22:19.212516 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.157027 4907 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wntx4" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server" containerID="cri-o://5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" gracePeriod=2 Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.641421 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.784847 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.785113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.785165 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.785916 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities" (OuterVolumeSpecName: "utilities") pod "8fe6c326-a67b-4381-bdfa-8716d5caf5c8" (UID: "8fe6c326-a67b-4381-bdfa-8716d5caf5c8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.790971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8" (OuterVolumeSpecName: "kube-api-access-q4lz8") pod "8fe6c326-a67b-4381-bdfa-8716d5caf5c8" (UID: "8fe6c326-a67b-4381-bdfa-8716d5caf5c8"). InnerVolumeSpecName "kube-api-access-q4lz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.854030 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fe6c326-a67b-4381-bdfa-8716d5caf5c8" (UID: "8fe6c326-a67b-4381-bdfa-8716d5caf5c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.887173 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.887222 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.887232 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164702 4907 generic.go:334] "Generic (PLEG): container finished" podID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" 
containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" exitCode=0 Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164743 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97"} Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164775 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a"} Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164792 4907 scope.go:117] "RemoveContainer" containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164793 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.191086 4907 scope.go:117] "RemoveContainer" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.214529 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.218291 4907 scope.go:117] "RemoveContainer" containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.220859 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.259153 4907 scope.go:117] "RemoveContainer" containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" Jan 27 18:22:22 crc kubenswrapper[4907]: E0127 18:22:22.263954 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97\": container with ID starting with 5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97 not found: ID does not exist" containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.264210 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97"} err="failed to get container status \"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97\": rpc error: code = NotFound desc = could not find container \"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97\": container with ID starting with 5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97 not 
found: ID does not exist" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.264319 4907 scope.go:117] "RemoveContainer" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" Jan 27 18:22:22 crc kubenswrapper[4907]: E0127 18:22:22.267086 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e\": container with ID starting with cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e not found: ID does not exist" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.267133 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e"} err="failed to get container status \"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e\": rpc error: code = NotFound desc = could not find container \"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e\": container with ID starting with cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e not found: ID does not exist" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.267159 4907 scope.go:117] "RemoveContainer" containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" Jan 27 18:22:22 crc kubenswrapper[4907]: E0127 18:22:22.267788 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf\": container with ID starting with fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf not found: ID does not exist" containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.267813 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf"} err="failed to get container status \"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf\": rpc error: code = NotFound desc = could not find container \"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf\": container with ID starting with fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf not found: ID does not exist" Jan 27 18:22:23 crc kubenswrapper[4907]: I0127 18:22:23.760808 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" path="/var/lib/kubelet/pods/8fe6c326-a67b-4381-bdfa-8716d5caf5c8/volumes" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.255505 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b447cd8-v5z5k" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console" containerID="cri-o://7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4" gracePeriod=15 Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.774070 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b447cd8-v5z5k_19ce08bb-03eb-4088-9b1a-4d42adedf584/console/0.log" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.774541 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.814106 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.814207 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.814225 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815295 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca" (OuterVolumeSpecName: "service-ca") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815381 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815432 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815457 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815487 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815945 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config" (OuterVolumeSpecName: "console-config") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816302 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816315 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816311 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816405 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.821789 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p" (OuterVolumeSpecName: "kube-api-access-h8s9p") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "kube-api-access-h8s9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.822794 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.823936 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918022 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918074 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918094 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918109 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918124 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256475 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b447cd8-v5z5k_19ce08bb-03eb-4088-9b1a-4d42adedf584/console/0.log" Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256526 4907 generic.go:334] "Generic (PLEG): container finished" podID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4" exitCode=2 Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256557 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerDied","Data":"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"} Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256609 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerDied","Data":"d22d0be7c5012debcbe1ac6b1b934a7244865eb06d8f858be9fb3384ddfdb6a5"} Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256625 4907 scope.go:117] "RemoveContainer" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4" Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256650 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.277158 4907 scope.go:117] "RemoveContainer" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4" Jan 27 18:22:34 crc kubenswrapper[4907]: E0127 18:22:34.277633 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4\": container with ID starting with 7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4 not found: ID does not exist" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4" Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.277688 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"} err="failed to get container status \"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4\": rpc error: code = NotFound desc = could not find container \"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4\": container with ID starting with 7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4 not found: ID does not exist" Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.287692 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"] Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.296368 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"] Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.695765 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"] Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696417 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-content" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696434 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-content" Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696453 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696460 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server" Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696476 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-utilities" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696485 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-utilities" Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696496 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696503 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696677 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696701 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.697964 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.699877 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.719347 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"] Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.745825 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.745901 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.745958 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: 
I0127 18:22:35.759930 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" path="/var/lib/kubelet/pods/19ce08bb-03eb-4088-9b1a-4d42adedf584/volumes" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.847751 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.847815 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.847917 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.848206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.848255 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.866533 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:36 crc kubenswrapper[4907]: I0127 18:22:36.012735 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:36 crc kubenswrapper[4907]: I0127 18:22:36.491872 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"] Jan 27 18:22:37 crc kubenswrapper[4907]: I0127 18:22:37.286600 4907 generic.go:334] "Generic (PLEG): container finished" podID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerID="48edaa40224ebcfb864021e626ee6dc1bc7bb660ac9246fbc606d9b1c024fdba" exitCode=0 Jan 27 18:22:37 crc kubenswrapper[4907]: I0127 18:22:37.286701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"48edaa40224ebcfb864021e626ee6dc1bc7bb660ac9246fbc606d9b1c024fdba"} Jan 27 18:22:37 crc kubenswrapper[4907]: I0127 18:22:37.286906 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerStarted","Data":"a5e8e98afdaec1ff30c265a471351153992b6d0397afaaeaa80cc63edc7c5d48"} Jan 27 18:22:39 crc kubenswrapper[4907]: I0127 18:22:39.306215 4907 generic.go:334] "Generic (PLEG): container finished" podID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerID="e1d8da94dc92c4d5492d43b230e90d21bf5d1cb1385a8d732758fa209550dec9" exitCode=0 Jan 27 18:22:39 crc kubenswrapper[4907]: I0127 18:22:39.306318 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"e1d8da94dc92c4d5492d43b230e90d21bf5d1cb1385a8d732758fa209550dec9"} Jan 27 18:22:40 crc kubenswrapper[4907]: I0127 18:22:40.314271 4907 
generic.go:334] "Generic (PLEG): container finished" podID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerID="06559d3f4e691c90b6f22cf90945852e002676c5b31a8a9cc32461047b73fa73" exitCode=0 Jan 27 18:22:40 crc kubenswrapper[4907]: I0127 18:22:40.314324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"06559d3f4e691c90b6f22cf90945852e002676c5b31a8a9cc32461047b73fa73"} Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.661448 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.735677 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.735837 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.735865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.737952 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle" (OuterVolumeSpecName: "bundle") pod "bc3f86b6-0741-4ef9-9244-fc9378289ec2" (UID: "bc3f86b6-0741-4ef9-9244-fc9378289ec2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.745015 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs" (OuterVolumeSpecName: "kube-api-access-s2cfs") pod "bc3f86b6-0741-4ef9-9244-fc9378289ec2" (UID: "bc3f86b6-0741-4ef9-9244-fc9378289ec2"). InnerVolumeSpecName "kube-api-access-s2cfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.755063 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util" (OuterVolumeSpecName: "util") pod "bc3f86b6-0741-4ef9-9244-fc9378289ec2" (UID: "bc3f86b6-0741-4ef9-9244-fc9378289ec2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.838378 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.838419 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.838428 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:42 crc kubenswrapper[4907]: I0127 18:22:42.332849 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"a5e8e98afdaec1ff30c265a471351153992b6d0397afaaeaa80cc63edc7c5d48"} Jan 27 18:22:42 crc kubenswrapper[4907]: I0127 18:22:42.332908 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e8e98afdaec1ff30c265a471351153992b6d0397afaaeaa80cc63edc7c5d48" Jan 27 18:22:42 crc kubenswrapper[4907]: I0127 18:22:42.332945 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424024 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"] Jan 27 18:22:50 crc kubenswrapper[4907]: E0127 18:22:50.424695 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="util" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424707 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="util" Jan 27 18:22:50 crc kubenswrapper[4907]: E0127 18:22:50.424722 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="extract" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424728 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="extract" Jan 27 18:22:50 crc kubenswrapper[4907]: E0127 18:22:50.424739 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="pull" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424745 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="pull" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424876 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="extract" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.425366 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427379 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427749 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427798 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427960 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.428389 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bbvxf" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.459343 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"] Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.492721 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-webhook-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.492812 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85tjb\" (UniqueName: \"kubernetes.io/projected/9a776a10-0883-468e-a8d3-087ca6429b1b-kube-api-access-85tjb\") pod 
\"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.492872 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.594236 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-webhook-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.594316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85tjb\" (UniqueName: \"kubernetes.io/projected/9a776a10-0883-468e-a8d3-087ca6429b1b-kube-api-access-85tjb\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.594359 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc 
kubenswrapper[4907]: I0127 18:22:50.601280 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.609965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-webhook-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.620094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85tjb\" (UniqueName: \"kubernetes.io/projected/9a776a10-0883-468e-a8d3-087ca6429b1b-kube-api-access-85tjb\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.748122 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.762875 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"] Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.763971 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.765694 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kvf5m"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.765924 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.772536 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.776951 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"]
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.801482 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-webhook-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.801602 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5shf6\" (UniqueName: \"kubernetes.io/projected/202ff14a-3733-4ccf-8202-94fac75bdfc4-kube-api-access-5shf6\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.801726 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-apiservice-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.903269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-apiservice-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.903869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-webhook-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.903967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5shf6\" (UniqueName: \"kubernetes.io/projected/202ff14a-3733-4ccf-8202-94fac75bdfc4-kube-api-access-5shf6\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.911330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-webhook-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.922097 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-apiservice-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.924808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5shf6\" (UniqueName: \"kubernetes.io/projected/202ff14a-3733-4ccf-8202-94fac75bdfc4-kube-api-access-5shf6\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.141765 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.203774 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"]
Jan 27 18:22:51 crc kubenswrapper[4907]: W0127 18:22:51.212409 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a776a10_0883_468e_a8d3_087ca6429b1b.slice/crio-66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09 WatchSource:0}: Error finding container 66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09: Status 404 returned error can't find the container with id 66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.415470 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerStarted","Data":"66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09"}
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.593616 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"]
Jan 27 18:22:51 crc kubenswrapper[4907]: W0127 18:22:51.596542 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod202ff14a_3733_4ccf_8202_94fac75bdfc4.slice/crio-efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe WatchSource:0}: Error finding container efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe: Status 404 returned error can't find the container with id efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe
Jan 27 18:22:52 crc kubenswrapper[4907]: I0127 18:22:52.425017 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" event={"ID":"202ff14a-3733-4ccf-8202-94fac75bdfc4","Type":"ContainerStarted","Data":"efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe"}
Jan 27 18:22:55 crc kubenswrapper[4907]: I0127 18:22:55.454900 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerStarted","Data":"e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9"}
Jan 27 18:22:55 crc kubenswrapper[4907]: I0127 18:22:55.455267 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:55 crc kubenswrapper[4907]: I0127 18:22:55.475293 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" podStartSLOduration=2.467301732 podStartE2EDuration="5.475273742s" podCreationTimestamp="2026-01-27 18:22:50 +0000 UTC" firstStartedPulling="2026-01-27 18:22:51.216843426 +0000 UTC m=+1026.346126038" lastFinishedPulling="2026-01-27 18:22:54.224815436 +0000 UTC m=+1029.354098048" observedRunningTime="2026-01-27 18:22:55.471498413 +0000 UTC m=+1030.600781035" watchObservedRunningTime="2026-01-27 18:22:55.475273742 +0000 UTC m=+1030.604556354"
Jan 27 18:22:57 crc kubenswrapper[4907]: I0127 18:22:57.469494 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" event={"ID":"202ff14a-3733-4ccf-8202-94fac75bdfc4","Type":"ContainerStarted","Data":"8b725d1a8516b5a78160938808b4596cc405881f6830ed402500ba20d107018a"}
Jan 27 18:22:57 crc kubenswrapper[4907]: I0127 18:22:57.470042 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:57 crc kubenswrapper[4907]: I0127 18:22:57.494777 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podStartSLOduration=2.551671719 podStartE2EDuration="7.494760065s" podCreationTimestamp="2026-01-27 18:22:50 +0000 UTC" firstStartedPulling="2026-01-27 18:22:51.599789249 +0000 UTC m=+1026.729071871" lastFinishedPulling="2026-01-27 18:22:56.542877605 +0000 UTC m=+1031.672160217" observedRunningTime="2026-01-27 18:22:57.490818671 +0000 UTC m=+1032.620101293" watchObservedRunningTime="2026-01-27 18:22:57.494760065 +0000 UTC m=+1032.624042677"
Jan 27 18:23:11 crc kubenswrapper[4907]: I0127 18:23:11.146696 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:23:30 crc kubenswrapper[4907]: I0127 18:23:30.751940 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.615921 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-csdnr"]
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.620241 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.623009 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.623258 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-xkjzk"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.624625 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"]
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.625293 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.625614 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.627254 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.659797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"]
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.718580 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-597cv"]
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.719751 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725022 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-f6f7l"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725183 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725306 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725566 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd967d05-2ecd-4578-9c41-22e36ff088c1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726789 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9kg\" (UniqueName: \"kubernetes.io/projected/dd967d05-2ecd-4578-9c41-22e36ff088c1-kube-api-access-fw9kg\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-startup\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-reloader\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726874 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-conf\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726891 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-sockets\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726922 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t6h9\" (UniqueName: \"kubernetes.io/projected/3a1b45eb-7bdd-4172-99f0-b74eabce028d-kube-api-access-4t6h9\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics-certs\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726997 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.732372 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-zfszb"]
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.733816 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.735698 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.756195 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zfszb"]
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828763 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-cert\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828827 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-reloader\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-conf\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-sockets\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828898 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metrics-certs\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828934 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metallb-excludel2\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828951 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r44t\" (UniqueName: \"kubernetes.io/projected/2ea123ce-4328-4379-8310-dbfff15acfbf-kube-api-access-6r44t\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828970 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t6h9\" (UniqueName: \"kubernetes.io/projected/3a1b45eb-7bdd-4172-99f0-b74eabce028d-kube-api-access-4t6h9\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828993 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829180 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics-certs\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829327 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd967d05-2ecd-4578-9c41-22e36ff088c1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqmdv\" (UniqueName: \"kubernetes.io/projected/aa958bdc-32c5-4e9f-841e-7427fdb87b31-kube-api-access-bqmdv\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829443 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9kg\" (UniqueName: \"kubernetes.io/projected/dd967d05-2ecd-4578-9c41-22e36ff088c1-kube-api-access-fw9kg\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829467 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-sockets\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-metrics-certs\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-startup\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829529 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-reloader\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829864 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-conf\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829935 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.830684 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-startup\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.852691 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd967d05-2ecd-4578-9c41-22e36ff088c1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.855147 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics-certs\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.856285 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9kg\" (UniqueName: \"kubernetes.io/projected/dd967d05-2ecd-4578-9c41-22e36ff088c1-kube-api-access-fw9kg\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.856527 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t6h9\" (UniqueName: \"kubernetes.io/projected/3a1b45eb-7bdd-4172-99f0-b74eabce028d-kube-api-access-4t6h9\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.931261 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metrics-certs\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.931352 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metallb-excludel2\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932301 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metallb-excludel2\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.931382 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r44t\" (UniqueName: \"kubernetes.io/projected/2ea123ce-4328-4379-8310-dbfff15acfbf-kube-api-access-6r44t\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932406 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: E0127 18:23:31.932511 4907 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 18:23:31 crc kubenswrapper[4907]: E0127 18:23:31.932580 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist podName:aa958bdc-32c5-4e9f-841e-7427fdb87b31 nodeName:}" failed. No retries permitted until 2026-01-27 18:23:32.432544229 +0000 UTC m=+1067.561826841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist") pod "speaker-597cv" (UID: "aa958bdc-32c5-4e9f-841e-7427fdb87b31") : secret "metallb-memberlist" not found
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932678 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqmdv\" (UniqueName: \"kubernetes.io/projected/aa958bdc-32c5-4e9f-841e-7427fdb87b31-kube-api-access-bqmdv\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932722 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-metrics-certs\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932759 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-cert\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.935774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metrics-certs\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.936275 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-cert\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.937290 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-metrics-certs\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.943913 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-csdnr"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.953523 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.957343 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r44t\" (UniqueName: \"kubernetes.io/projected/2ea123ce-4328-4379-8310-dbfff15acfbf-kube-api-access-6r44t\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.963247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqmdv\" (UniqueName: \"kubernetes.io/projected/aa958bdc-32c5-4e9f-841e-7427fdb87b31-kube-api-access-bqmdv\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.054176 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.436997 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"]
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.439791 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:32 crc kubenswrapper[4907]: E0127 18:23:32.439970 4907 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 18:23:32 crc kubenswrapper[4907]: E0127 18:23:32.440053 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist podName:aa958bdc-32c5-4e9f-841e-7427fdb87b31 nodeName:}" failed. No retries permitted until 2026-01-27 18:23:33.440030278 +0000 UTC m=+1068.569312890 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist") pod "speaker-597cv" (UID: "aa958bdc-32c5-4e9f-841e-7427fdb87b31") : secret "metallb-memberlist" not found
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.490185 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zfszb"]
Jan 27 18:23:32 crc kubenswrapper[4907]: W0127 18:23:32.496065 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea123ce_4328_4379_8310_dbfff15acfbf.slice/crio-9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108 WatchSource:0}: Error finding container 9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108: Status 404 returned error can't find the container with id 9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.772991 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" event={"ID":"dd967d05-2ecd-4578-9c41-22e36ff088c1","Type":"ContainerStarted","Data":"77e378198fdb5d60bcf337a9a0f6ebad022f22c16fef0f7c9a8be0b52d275a12"}
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.774000 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"362f1edb8a8b542e8b216720056c5f4b701ab36658be615fb2717a8e56ff9554"}
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.776130 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zfszb" event={"ID":"2ea123ce-4328-4379-8310-dbfff15acfbf","Type":"ContainerStarted","Data":"40cd7a5d80d218271bea8717a319835353921836aafb58ed3d0ec0874ec2a345"}
Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.776182 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zfszb" event={"ID":"2ea123ce-4328-4379-8310-dbfff15acfbf","Type":"ContainerStarted","Data":"9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108"}
Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.464720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.479053 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv"
Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.541415 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-597cv"
Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.809453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zfszb" event={"ID":"2ea123ce-4328-4379-8310-dbfff15acfbf","Type":"ContainerStarted","Data":"b104306e13369ae74dcb61d58fc4b9e245d2a21e0102447b84cbdab27c73428e"}
Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.810646 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-zfszb"
Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.812987 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-597cv" event={"ID":"aa958bdc-32c5-4e9f-841e-7427fdb87b31","Type":"ContainerStarted","Data":"41ab4c3ed7c09c2faaf72bc84c3c6abd7cb0fb785e2992c69cf2391343255b42"}
Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.837239 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-zfszb" podStartSLOduration=2.837222006 podStartE2EDuration="2.837222006s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:23:33.834418906 +0000 UTC m=+1068.963701588" watchObservedRunningTime="2026-01-27 18:23:33.837222006 +0000 UTC m=+1068.966504618"
Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.832341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-597cv" event={"ID":"aa958bdc-32c5-4e9f-841e-7427fdb87b31","Type":"ContainerStarted","Data":"5a0b72a1110ec8da615bc2ef5523a765fded0866c290167b1cc95c9f32799cfb"}
Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.832745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-597cv" event={"ID":"aa958bdc-32c5-4e9f-841e-7427fdb87b31","Type":"ContainerStarted","Data":"75e7123b6c18492aa2931147d03564faca2febccbc3878b7adf849854dd0e818"}
Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.832772 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-597cv"
Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.857401 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-597cv" podStartSLOduration=3.857361873 podStartE2EDuration="3.857361873s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:23:34.850844426 +0000 UTC m=+1069.980127038" watchObservedRunningTime="2026-01-27 18:23:34.857361873 +0000 UTC m=+1069.986644485"
Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.897734 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" event={"ID":"dd967d05-2ecd-4578-9c41-22e36ff088c1","Type":"ContainerStarted","Data":"4731f805fb1c5df1bac62dacf64b38b7d6a53c73263c1e42e7b0f4105bbfff5d"}
Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.898239 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"
Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.899495 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="ed6bf069421f22bffc1737f2704c0195e297d0c41ec17739d12366b076e8edee" exitCode=0
Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.899519 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"ed6bf069421f22bffc1737f2704c0195e297d0c41ec17739d12366b076e8edee"}
Jan 27 18:23:40 crc
kubenswrapper[4907]: I0127 18:23:40.921852 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podStartSLOduration=2.382870139 podStartE2EDuration="9.921836119s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="2026-01-27 18:23:32.441506701 +0000 UTC m=+1067.570789313" lastFinishedPulling="2026-01-27 18:23:39.980472681 +0000 UTC m=+1075.109755293" observedRunningTime="2026-01-27 18:23:40.920184311 +0000 UTC m=+1076.049466973" watchObservedRunningTime="2026-01-27 18:23:40.921836119 +0000 UTC m=+1076.051118731" Jan 27 18:23:41 crc kubenswrapper[4907]: I0127 18:23:41.919850 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="3f6ebcb60e75c81fab0a67c08ec59f5b4352844736add1383a32c0735642654f" exitCode=0 Jan 27 18:23:41 crc kubenswrapper[4907]: I0127 18:23:41.920748 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"3f6ebcb60e75c81fab0a67c08ec59f5b4352844736add1383a32c0735642654f"} Jan 27 18:23:42 crc kubenswrapper[4907]: I0127 18:23:42.059612 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:42 crc kubenswrapper[4907]: I0127 18:23:42.932071 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="be05aa27079965e70410c1b2999f5d9b87c2d92d6a2c9bfec658c5af1d68ffee" exitCode=0 Jan 27 18:23:42 crc kubenswrapper[4907]: I0127 18:23:42.932125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"be05aa27079965e70410c1b2999f5d9b87c2d92d6a2c9bfec658c5af1d68ffee"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.545989 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-597cv" Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958259 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"0dd0c024d285007081e7505fc159f97547e1723ffb1c8c7f43a625cf76b85def"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958307 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"b02b3f8eaa54b412861cb429e240b24487a525470003342b463ac187f4ff4975"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958322 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"82d338fa66a1b5f04505531c8c91400b8ffb7774a88ad8e48888fae516c073ae"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958334 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958345 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"6b21e7a354a1cf3b780d112ae0f5ebea2fdedd8a627d0ec104d29ce320c5bbbf"} Jan 27 18:23:44 crc kubenswrapper[4907]: I0127 18:23:44.976172 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"0c8ab571a6f468a591b9f8d3070624c93958577ecf6dbabfe9900f1ee9680097"} Jan 27 18:23:44 crc kubenswrapper[4907]: I0127 18:23:44.976395 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:45 crc kubenswrapper[4907]: I0127 18:23:45.018109 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-csdnr" podStartSLOduration=6.178597889 podStartE2EDuration="14.018079027s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="2026-01-27 18:23:32.110122264 +0000 UTC m=+1067.239404876" lastFinishedPulling="2026-01-27 18:23:39.949603402 +0000 UTC m=+1075.078886014" observedRunningTime="2026-01-27 18:23:45.007701468 +0000 UTC m=+1080.136984120" watchObservedRunningTime="2026-01-27 18:23:45.018079027 +0000 UTC m=+1080.147361679" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.621888 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.623777 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.628088 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.628118 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.628147 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ds6m6" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.660689 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.694491 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"openstack-operator-index-sft9m\" (UID: \"869bd25c-e49d-4825-9020-af568185847c\") " pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.795801 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"openstack-operator-index-sft9m\" (UID: \"869bd25c-e49d-4825-9020-af568185847c\") " pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.815208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"openstack-operator-index-sft9m\" (UID: 
\"869bd25c-e49d-4825-9020-af568185847c\") " pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.943224 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.946071 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.986047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:47 crc kubenswrapper[4907]: I0127 18:23:47.414397 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:47 crc kubenswrapper[4907]: W0127 18:23:47.429907 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869bd25c_e49d_4825_9020_af568185847c.slice/crio-1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb WatchSource:0}: Error finding container 1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb: Status 404 returned error can't find the container with id 1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb Jan 27 18:23:48 crc kubenswrapper[4907]: I0127 18:23:48.008516 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerStarted","Data":"1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb"} Jan 27 18:23:49 crc kubenswrapper[4907]: I0127 18:23:49.988718 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.025310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerStarted","Data":"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc"} Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.041898 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sft9m" podStartSLOduration=1.745560766 podStartE2EDuration="4.041881115s" podCreationTimestamp="2026-01-27 18:23:46 +0000 UTC" firstStartedPulling="2026-01-27 18:23:47.432509638 +0000 UTC m=+1082.561792240" lastFinishedPulling="2026-01-27 18:23:49.728829977 +0000 UTC m=+1084.858112589" observedRunningTime="2026-01-27 18:23:50.041217756 +0000 UTC m=+1085.170500408" watchObservedRunningTime="2026-01-27 18:23:50.041881115 +0000 UTC m=+1085.171163737" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.596699 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xc2fp"] Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.597628 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.625775 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xc2fp"] Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.672330 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbf9\" (UniqueName: \"kubernetes.io/projected/0a849662-db42-42f0-9317-eb3714b775d0-kube-api-access-mxbf9\") pod \"openstack-operator-index-xc2fp\" (UID: \"0a849662-db42-42f0-9317-eb3714b775d0\") " pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.775162 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxbf9\" (UniqueName: \"kubernetes.io/projected/0a849662-db42-42f0-9317-eb3714b775d0-kube-api-access-mxbf9\") pod \"openstack-operator-index-xc2fp\" (UID: \"0a849662-db42-42f0-9317-eb3714b775d0\") " pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.807492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxbf9\" (UniqueName: \"kubernetes.io/projected/0a849662-db42-42f0-9317-eb3714b775d0-kube-api-access-mxbf9\") pod \"openstack-operator-index-xc2fp\" (UID: \"0a849662-db42-42f0-9317-eb3714b775d0\") " pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.930484 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.036004 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sft9m" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" containerID="cri-o://5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" gracePeriod=2 Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.420463 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xc2fp"] Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.441271 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.490747 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"869bd25c-e49d-4825-9020-af568185847c\" (UID: \"869bd25c-e49d-4825-9020-af568185847c\") " Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.500356 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk" (OuterVolumeSpecName: "kube-api-access-2dgfk") pod "869bd25c-e49d-4825-9020-af568185847c" (UID: "869bd25c-e49d-4825-9020-af568185847c"). InnerVolumeSpecName "kube-api-access-2dgfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.593539 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") on node \"crc\" DevicePath \"\"" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.962414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.045880 4907 generic.go:334] "Generic (PLEG): container finished" podID="869bd25c-e49d-4825-9020-af568185847c" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" exitCode=0 Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.045940 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.045979 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerDied","Data":"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.046013 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerDied","Data":"1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.046034 4907 scope.go:117] "RemoveContainer" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.048115 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xc2fp" 
event={"ID":"0a849662-db42-42f0-9317-eb3714b775d0","Type":"ContainerStarted","Data":"7a3ddecc4666a1edd6e6be0a48822a7458971e5e1454d6da56d6ff456c68ae08"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.048158 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xc2fp" event={"ID":"0a849662-db42-42f0-9317-eb3714b775d0","Type":"ContainerStarted","Data":"abfac714ead85d81126d896f8c3f75fd9b29184b3cec584eb875cc9abd336e78"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.068786 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.069302 4907 scope.go:117] "RemoveContainer" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" Jan 27 18:23:52 crc kubenswrapper[4907]: E0127 18:23:52.069793 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc\": container with ID starting with 5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc not found: ID does not exist" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.069847 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc"} err="failed to get container status \"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc\": rpc error: code = NotFound desc = could not find container \"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc\": container with ID starting with 5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc not found: ID does not exist" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.076129 4907 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.080529 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xc2fp" podStartSLOduration=2.015865799 podStartE2EDuration="2.080510101s" podCreationTimestamp="2026-01-27 18:23:50 +0000 UTC" firstStartedPulling="2026-01-27 18:23:51.431124914 +0000 UTC m=+1086.560407526" lastFinishedPulling="2026-01-27 18:23:51.495769206 +0000 UTC m=+1086.625051828" observedRunningTime="2026-01-27 18:23:52.076104574 +0000 UTC m=+1087.205387186" watchObservedRunningTime="2026-01-27 18:23:52.080510101 +0000 UTC m=+1087.209792713" Jan 27 18:23:53 crc kubenswrapper[4907]: I0127 18:23:53.764010 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869bd25c-e49d-4825-9020-af568185847c" path="/var/lib/kubelet/pods/869bd25c-e49d-4825-9020-af568185847c/volumes" Jan 27 18:23:56 crc kubenswrapper[4907]: I0127 18:23:56.521393 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:23:56 crc kubenswrapper[4907]: I0127 18:23:56.521813 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:24:00 crc kubenswrapper[4907]: I0127 18:24:00.931801 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:00 crc kubenswrapper[4907]: I0127 18:24:00.932356 4907 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:00 crc kubenswrapper[4907]: I0127 18:24:00.966261 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:01 crc kubenswrapper[4907]: I0127 18:24:01.185278 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:01 crc kubenswrapper[4907]: I0127 18:24:01.949364 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.476057 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd"] Jan 27 18:24:03 crc kubenswrapper[4907]: E0127 18:24:03.477840 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.477855 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.478175 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.482238 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.486918 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd"] Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.491452 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hksc6" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.622069 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.622146 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.622250 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 
18:24:03.724676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.724792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.725057 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.726022 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.726218 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.763291 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.818389 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:04 crc kubenswrapper[4907]: I0127 18:24:04.335209 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd"] Jan 27 18:24:04 crc kubenswrapper[4907]: W0127 18:24:04.339629 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e27b41_8fcc_441c_a1cd_0cfedddea164.slice/crio-8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7 WatchSource:0}: Error finding container 8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7: Status 404 returned error can't find the container with id 8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7 Jan 27 18:24:05 crc kubenswrapper[4907]: I0127 18:24:05.176416 4907 generic.go:334] "Generic (PLEG): container finished" podID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerID="cac48695ec0c97af9ca5a93963057747a0b5161e5d0e7a0ba604b4976826d315" exitCode=0 Jan 27 
18:24:05 crc kubenswrapper[4907]: I0127 18:24:05.176545 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"cac48695ec0c97af9ca5a93963057747a0b5161e5d0e7a0ba604b4976826d315"} Jan 27 18:24:05 crc kubenswrapper[4907]: I0127 18:24:05.176935 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerStarted","Data":"8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7"} Jan 27 18:24:06 crc kubenswrapper[4907]: I0127 18:24:06.190492 4907 generic.go:334] "Generic (PLEG): container finished" podID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerID="c3c0a8d8a060e5f546963619abcca744668071adda6be559b5e994cea6ef285a" exitCode=0 Jan 27 18:24:06 crc kubenswrapper[4907]: I0127 18:24:06.190544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"c3c0a8d8a060e5f546963619abcca744668071adda6be559b5e994cea6ef285a"} Jan 27 18:24:07 crc kubenswrapper[4907]: I0127 18:24:07.206289 4907 generic.go:334] "Generic (PLEG): container finished" podID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerID="ba21fe2dffcf5dfcdf72dc7ae8756238536aea063b6110845e724fb7077afa64" exitCode=0 Jan 27 18:24:07 crc kubenswrapper[4907]: I0127 18:24:07.206355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"ba21fe2dffcf5dfcdf72dc7ae8756238536aea063b6110845e724fb7077afa64"} Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.666185 
4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.738038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"31e27b41-8fcc-441c-a1cd-0cfedddea164\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.738091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"31e27b41-8fcc-441c-a1cd-0cfedddea164\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.738273 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"31e27b41-8fcc-441c-a1cd-0cfedddea164\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.739203 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle" (OuterVolumeSpecName: "bundle") pod "31e27b41-8fcc-441c-a1cd-0cfedddea164" (UID: "31e27b41-8fcc-441c-a1cd-0cfedddea164"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.748904 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc" (OuterVolumeSpecName: "kube-api-access-sjxcc") pod "31e27b41-8fcc-441c-a1cd-0cfedddea164" (UID: "31e27b41-8fcc-441c-a1cd-0cfedddea164"). 
InnerVolumeSpecName "kube-api-access-sjxcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.758750 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util" (OuterVolumeSpecName: "util") pod "31e27b41-8fcc-441c-a1cd-0cfedddea164" (UID: "31e27b41-8fcc-441c-a1cd-0cfedddea164"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.840379 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") on node \"crc\" DevicePath \"\"" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.840443 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.840464 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:24:09 crc kubenswrapper[4907]: I0127 18:24:09.238066 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7"} Jan 27 18:24:09 crc kubenswrapper[4907]: I0127 18:24:09.238518 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7" Jan 27 18:24:09 crc kubenswrapper[4907]: I0127 18:24:09.238152 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.478921 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc"] Jan 27 18:24:15 crc kubenswrapper[4907]: E0127 18:24:15.479943 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="util" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.479960 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="util" Jan 27 18:24:15 crc kubenswrapper[4907]: E0127 18:24:15.479984 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="extract" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.479990 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="extract" Jan 27 18:24:15 crc kubenswrapper[4907]: E0127 18:24:15.480007 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="pull" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.480014 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="pull" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.480170 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="extract" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.480767 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.483189 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sf6qj" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.506601 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc"] Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.567072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg56h\" (UniqueName: \"kubernetes.io/projected/f22de95d-f437-432c-917a-a08c082e02c4-kube-api-access-zg56h\") pod \"openstack-operator-controller-init-7c754559d6-wt8dc\" (UID: \"f22de95d-f437-432c-917a-a08c082e02c4\") " pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.668147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg56h\" (UniqueName: \"kubernetes.io/projected/f22de95d-f437-432c-917a-a08c082e02c4-kube-api-access-zg56h\") pod \"openstack-operator-controller-init-7c754559d6-wt8dc\" (UID: \"f22de95d-f437-432c-917a-a08c082e02c4\") " pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.688101 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg56h\" (UniqueName: \"kubernetes.io/projected/f22de95d-f437-432c-917a-a08c082e02c4-kube-api-access-zg56h\") pod \"openstack-operator-controller-init-7c754559d6-wt8dc\" (UID: \"f22de95d-f437-432c-917a-a08c082e02c4\") " pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.800400 4907 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:16 crc kubenswrapper[4907]: I0127 18:24:16.254781 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc"] Jan 27 18:24:16 crc kubenswrapper[4907]: W0127 18:24:16.264982 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf22de95d_f437_432c_917a_a08c082e02c4.slice/crio-c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b WatchSource:0}: Error finding container c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b: Status 404 returned error can't find the container with id c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b Jan 27 18:24:16 crc kubenswrapper[4907]: I0127 18:24:16.298030 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerStarted","Data":"c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b"} Jan 27 18:24:21 crc kubenswrapper[4907]: I0127 18:24:21.343308 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerStarted","Data":"629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06"} Jan 27 18:24:21 crc kubenswrapper[4907]: I0127 18:24:21.344157 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:21 crc kubenswrapper[4907]: I0127 18:24:21.402850 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podStartSLOduration=2.223204388 
podStartE2EDuration="6.402828078s" podCreationTimestamp="2026-01-27 18:24:15 +0000 UTC" firstStartedPulling="2026-01-27 18:24:16.267230409 +0000 UTC m=+1111.396513021" lastFinishedPulling="2026-01-27 18:24:20.446854099 +0000 UTC m=+1115.576136711" observedRunningTime="2026-01-27 18:24:21.395220149 +0000 UTC m=+1116.524502841" watchObservedRunningTime="2026-01-27 18:24:21.402828078 +0000 UTC m=+1116.532110720" Jan 27 18:24:25 crc kubenswrapper[4907]: I0127 18:24:25.804065 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:26 crc kubenswrapper[4907]: I0127 18:24:26.521843 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:24:26 crc kubenswrapper[4907]: I0127 18:24:26.522152 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.351171 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.352447 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.358341 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-whv2v" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.364501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.418118 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.419183 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.421191 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2t89z" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.433703 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.434855 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.438546 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-twd54" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.441436 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.449068 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.449694 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxpl\" (UniqueName: \"kubernetes.io/projected/e6378a4c-96e5-4151-a0ca-c320fa9b667d-kube-api-access-lxxpl\") pod \"barbican-operator-controller-manager-7f86f8796f-8jsvt\" (UID: \"e6378a4c-96e5-4151-a0ca-c320fa9b667d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.458843 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.459863 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.467069 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5wmp4" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.467372 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.478699 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.491017 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.495674 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qs7qz" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.510591 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.523024 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.531905 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-h96k6" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.541250 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551448 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xqv\" (UniqueName: \"kubernetes.io/projected/a05cfe48-4bf5-4199-aefa-de59259798c4-kube-api-access-l2xqv\") pod \"glance-operator-controller-manager-78fdd796fd-7hgqc\" (UID: \"a05cfe48-4bf5-4199-aefa-de59259798c4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551532 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ftf\" (UniqueName: \"kubernetes.io/projected/277579e8-58c3-4ad7-b902-e62f045ba8c6-kube-api-access-44ftf\") pod \"designate-operator-controller-manager-b45d7bf98-6lprh\" (UID: \"277579e8-58c3-4ad7-b902-e62f045ba8c6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdgpb\" (UniqueName: \"kubernetes.io/projected/e9f20d2f-16bf-49df-9c41-6fd6faa6ef67-kube-api-access-mdgpb\") pod \"heat-operator-controller-manager-594c8c9d5d-4nlx7\" (UID: \"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 
18:24:46.551672 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxpl\" (UniqueName: \"kubernetes.io/projected/e6378a4c-96e5-4151-a0ca-c320fa9b667d-kube-api-access-lxxpl\") pod \"barbican-operator-controller-manager-7f86f8796f-8jsvt\" (UID: \"e6378a4c-96e5-4151-a0ca-c320fa9b667d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551702 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slszq\" (UniqueName: \"kubernetes.io/projected/018e0dfe-5282-40d5-87db-8551645d6e02-kube-api-access-slszq\") pod \"cinder-operator-controller-manager-7478f7dbf9-nznnn\" (UID: \"018e0dfe-5282-40d5-87db-8551645d6e02\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.564730 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.572972 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.574443 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.581314 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-p795z" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.590103 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.591349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.596460 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.596993 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-w76rv" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.599573 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.653885 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.653981 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdgpb\" (UniqueName: 
\"kubernetes.io/projected/e9f20d2f-16bf-49df-9c41-6fd6faa6ef67-kube-api-access-mdgpb\") pod \"heat-operator-controller-manager-594c8c9d5d-4nlx7\" (UID: \"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654005 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfgll\" (UniqueName: \"kubernetes.io/projected/7c6ac148-bc7a-4480-9155-8f78567a5070-kube-api-access-jfgll\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654035 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slszq\" (UniqueName: \"kubernetes.io/projected/018e0dfe-5282-40d5-87db-8551645d6e02-kube-api-access-slszq\") pod \"cinder-operator-controller-manager-7478f7dbf9-nznnn\" (UID: \"018e0dfe-5282-40d5-87db-8551645d6e02\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654068 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klrnr\" (UniqueName: \"kubernetes.io/projected/c4a64f11-d6ef-487e-afa3-1d9bdbea9424-kube-api-access-klrnr\") pod \"ironic-operator-controller-manager-598f7747c9-hb2q7\" (UID: \"c4a64f11-d6ef-487e-afa3-1d9bdbea9424\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8dq\" (UniqueName: \"kubernetes.io/projected/f1ed42c6-98ac-41b8-96df-24919c0f9837-kube-api-access-xr8dq\") pod 
\"horizon-operator-controller-manager-77d5c5b54f-b29cj\" (UID: \"f1ed42c6-98ac-41b8-96df-24919c0f9837\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2xqv\" (UniqueName: \"kubernetes.io/projected/a05cfe48-4bf5-4199-aefa-de59259798c4-kube-api-access-l2xqv\") pod \"glance-operator-controller-manager-78fdd796fd-7hgqc\" (UID: \"a05cfe48-4bf5-4199-aefa-de59259798c4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654212 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44ftf\" (UniqueName: \"kubernetes.io/projected/277579e8-58c3-4ad7-b902-e62f045ba8c6-kube-api-access-44ftf\") pod \"designate-operator-controller-manager-b45d7bf98-6lprh\" (UID: \"277579e8-58c3-4ad7-b902-e62f045ba8c6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.666051 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxpl\" (UniqueName: \"kubernetes.io/projected/e6378a4c-96e5-4151-a0ca-c320fa9b667d-kube-api-access-lxxpl\") pod \"barbican-operator-controller-manager-7f86f8796f-8jsvt\" (UID: \"e6378a4c-96e5-4151-a0ca-c320fa9b667d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.678541 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.680187 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.690165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2xqv\" (UniqueName: \"kubernetes.io/projected/a05cfe48-4bf5-4199-aefa-de59259798c4-kube-api-access-l2xqv\") pod \"glance-operator-controller-manager-78fdd796fd-7hgqc\" (UID: \"a05cfe48-4bf5-4199-aefa-de59259798c4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.690471 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.690851 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-whmjg" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.695277 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.700228 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44ftf\" (UniqueName: \"kubernetes.io/projected/277579e8-58c3-4ad7-b902-e62f045ba8c6-kube-api-access-44ftf\") pod \"designate-operator-controller-manager-b45d7bf98-6lprh\" (UID: \"277579e8-58c3-4ad7-b902-e62f045ba8c6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.700235 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slszq\" (UniqueName: \"kubernetes.io/projected/018e0dfe-5282-40d5-87db-8551645d6e02-kube-api-access-slszq\") pod \"cinder-operator-controller-manager-7478f7dbf9-nznnn\" (UID: 
\"018e0dfe-5282-40d5-87db-8551645d6e02\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.703067 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdgpb\" (UniqueName: \"kubernetes.io/projected/e9f20d2f-16bf-49df-9c41-6fd6faa6ef67-kube-api-access-mdgpb\") pod \"heat-operator-controller-manager-594c8c9d5d-4nlx7\" (UID: \"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.740103 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.741564 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.744134 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8dgzx" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.755999 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757524 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klrnr\" (UniqueName: \"kubernetes.io/projected/c4a64f11-d6ef-487e-afa3-1d9bdbea9424-kube-api-access-klrnr\") pod \"ironic-operator-controller-manager-598f7747c9-hb2q7\" (UID: \"c4a64f11-d6ef-487e-afa3-1d9bdbea9424\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757646 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8dq\" (UniqueName: \"kubernetes.io/projected/f1ed42c6-98ac-41b8-96df-24919c0f9837-kube-api-access-xr8dq\") pod \"horizon-operator-controller-manager-77d5c5b54f-b29cj\" (UID: \"f1ed42c6-98ac-41b8-96df-24919c0f9837\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757793 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757915 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2xt\" (UniqueName: \"kubernetes.io/projected/e257f81e-9460-4391-a7a5-cca3fc9230d9-kube-api-access-8l2xt\") pod \"keystone-operator-controller-manager-b8b6d4659-kjhgn\" (UID: \"e257f81e-9460-4391-a7a5-cca3fc9230d9\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.758006 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfgll\" (UniqueName: \"kubernetes.io/projected/7c6ac148-bc7a-4480-9155-8f78567a5070-kube-api-access-jfgll\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"
Jan 27 18:24:46 crc kubenswrapper[4907]: E0127 18:24:46.758642 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 18:24:46 crc kubenswrapper[4907]: E0127 18:24:46.758710 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:47.258692446 +0000 UTC m=+1142.387975058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.765389 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.779297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfgll\" (UniqueName: \"kubernetes.io/projected/7c6ac148-bc7a-4480-9155-8f78567a5070-kube-api-access-jfgll\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.790310 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8dq\" (UniqueName: \"kubernetes.io/projected/f1ed42c6-98ac-41b8-96df-24919c0f9837-kube-api-access-xr8dq\") pod \"horizon-operator-controller-manager-77d5c5b54f-b29cj\" (UID: \"f1ed42c6-98ac-41b8-96df-24919c0f9837\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.790795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klrnr\" (UniqueName: \"kubernetes.io/projected/c4a64f11-d6ef-487e-afa3-1d9bdbea9424-kube-api-access-klrnr\") pod \"ironic-operator-controller-manager-598f7747c9-hb2q7\" (UID: \"c4a64f11-d6ef-487e-afa3-1d9bdbea9424\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.795879 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.797245 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.808638 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.832612 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.838737 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.848749 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.848838 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.853745 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.855784 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.853800 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sgf77"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.859104 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfk9\" (UniqueName: \"kubernetes.io/projected/bc6ebe7e-320a-4193-8db4-3d4574ba1c3b-kube-api-access-rdfk9\") pod \"manila-operator-controller-manager-78c6999f6f-mst5f\" (UID: \"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.859244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2xt\" (UniqueName: \"kubernetes.io/projected/e257f81e-9460-4391-a7a5-cca3fc9230d9-kube-api-access-8l2xt\") pod \"keystone-operator-controller-manager-b8b6d4659-kjhgn\" (UID: \"e257f81e-9460-4391-a7a5-cca3fc9230d9\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.860358 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zxqc9"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.860850 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.864450 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.869380 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-8tbmw"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.870520 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.883111 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.883304 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.889881 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.891213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2xt\" (UniqueName: \"kubernetes.io/projected/e257f81e-9460-4391-a7a5-cca3fc9230d9-kube-api-access-8l2xt\") pod \"keystone-operator-controller-manager-b8b6d4659-kjhgn\" (UID: \"e257f81e-9460-4391-a7a5-cca3fc9230d9\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.893359 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.895699 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b84cs"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.937418 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.949748 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.951930 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.954325 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.954537 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7x8cp"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.959740 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.960703 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nnn\" (UniqueName: \"kubernetes.io/projected/f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b-kube-api-access-v5nnn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-9t69q\" (UID: \"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.960853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfk9\" (UniqueName: \"kubernetes.io/projected/bc6ebe7e-320a-4193-8db4-3d4574ba1c3b-kube-api-access-rdfk9\") pod \"manila-operator-controller-manager-78c6999f6f-mst5f\" (UID: \"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.960900 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszk7\" (UniqueName: \"kubernetes.io/projected/bd2d065d-dd6e-43bc-a725-e7fe52c024b1-kube-api-access-kszk7\") pod \"nova-operator-controller-manager-7bdb645866-fnh99\" (UID: \"bd2d065d-dd6e-43bc-a725-e7fe52c024b1\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.961010 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.962969 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vd9bf"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.967667 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrm9\" (UniqueName: \"kubernetes.io/projected/a733096f-e99d-4186-8542-1d8cb16012d2-kube-api-access-fxrm9\") pod \"octavia-operator-controller-manager-5f4cd88d46-tn4d6\" (UID: \"a733096f-e99d-4186-8542-1d8cb16012d2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.967719 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhw5d\" (UniqueName: \"kubernetes.io/projected/774ac09a-4164-4e22-9ea2-385ac4ef87eb-kube-api-access-rhw5d\") pod \"neutron-operator-controller-manager-78d58447c5-l2pdl\" (UID: \"774ac09a-4164-4e22-9ea2-385ac4ef87eb\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.976337 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"]
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.977380 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.980901 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xj5jr"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.986542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfk9\" (UniqueName: \"kubernetes.io/projected/bc6ebe7e-320a-4193-8db4-3d4574ba1c3b-kube-api-access-rdfk9\") pod \"manila-operator-controller-manager-78c6999f6f-mst5f\" (UID: \"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"
Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.992216 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.000813 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.010351 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.018970 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.036710 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.037912 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.046320 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.047853 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g8xgp"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.055015 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.056362 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.059351 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7hms8"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.068893 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszk7\" (UniqueName: \"kubernetes.io/projected/bd2d065d-dd6e-43bc-a725-e7fe52c024b1-kube-api-access-kszk7\") pod \"nova-operator-controller-manager-7bdb645866-fnh99\" (UID: \"bd2d065d-dd6e-43bc-a725-e7fe52c024b1\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d58nt\" (UniqueName: \"kubernetes.io/projected/24caa967-ac26-4666-bf41-e2c4bc6ebb0f-kube-api-access-d58nt\") pod \"swift-operator-controller-manager-547cbdb99f-fljbt\" (UID: \"24caa967-ac26-4666-bf41-e2c4bc6ebb0f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069267 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrm9\" (UniqueName: \"kubernetes.io/projected/a733096f-e99d-4186-8542-1d8cb16012d2-kube-api-access-fxrm9\") pod \"octavia-operator-controller-manager-5f4cd88d46-tn4d6\" (UID: \"a733096f-e99d-4186-8542-1d8cb16012d2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069355 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhw5d\" (UniqueName: \"kubernetes.io/projected/774ac09a-4164-4e22-9ea2-385ac4ef87eb-kube-api-access-rhw5d\") pod \"neutron-operator-controller-manager-78d58447c5-l2pdl\" (UID: \"774ac09a-4164-4e22-9ea2-385ac4ef87eb\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069442 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nnn\" (UniqueName: \"kubernetes.io/projected/f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b-kube-api-access-v5nnn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-9t69q\" (UID: \"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069517 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjsn\" (UniqueName: \"kubernetes.io/projected/a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2-kube-api-access-txjsn\") pod \"ovn-operator-controller-manager-6f75f45d54-bf27l\" (UID: \"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069618 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4p7\" (UniqueName: \"kubernetes.io/projected/f84f4e53-c1de-49a3-8435-5e4999a034fd-kube-api-access-ch4p7\") pod \"placement-operator-controller-manager-79d5ccc684-mpgzf\" (UID: \"f84f4e53-c1de-49a3-8435-5e4999a034fd\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zft97\" (UniqueName: \"kubernetes.io/projected/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-kube-api-access-zft97\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069792 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.076882 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.088499 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrm9\" (UniqueName: \"kubernetes.io/projected/a733096f-e99d-4186-8542-1d8cb16012d2-kube-api-access-fxrm9\") pod \"octavia-operator-controller-manager-5f4cd88d46-tn4d6\" (UID: \"a733096f-e99d-4186-8542-1d8cb16012d2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.102715 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nnn\" (UniqueName: \"kubernetes.io/projected/f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b-kube-api-access-v5nnn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-9t69q\" (UID: \"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.103595 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszk7\" (UniqueName: \"kubernetes.io/projected/bd2d065d-dd6e-43bc-a725-e7fe52c024b1-kube-api-access-kszk7\") pod \"nova-operator-controller-manager-7bdb645866-fnh99\" (UID: \"bd2d065d-dd6e-43bc-a725-e7fe52c024b1\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.105977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhw5d\" (UniqueName: \"kubernetes.io/projected/774ac09a-4164-4e22-9ea2-385ac4ef87eb-kube-api-access-rhw5d\") pod \"neutron-operator-controller-manager-78d58447c5-l2pdl\" (UID: \"774ac09a-4164-4e22-9ea2-385ac4ef87eb\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.121562 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.122671 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.128015 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.133353 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-25mbl"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.137503 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.163520 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sgs\" (UniqueName: \"kubernetes.io/projected/7f5a8eee-f06b-4376-90d6-ff3faef0e8af-kube-api-access-r6sgs\") pod \"test-operator-controller-manager-69797bbcbd-ph8fw\" (UID: \"7f5a8eee-f06b-4376-90d6-ff3faef0e8af\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172344 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d58nt\" (UniqueName: \"kubernetes.io/projected/24caa967-ac26-4666-bf41-e2c4bc6ebb0f-kube-api-access-d58nt\") pod \"swift-operator-controller-manager-547cbdb99f-fljbt\" (UID: \"24caa967-ac26-4666-bf41-e2c4bc6ebb0f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172466 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txjsn\" (UniqueName: \"kubernetes.io/projected/a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2-kube-api-access-txjsn\") pod \"ovn-operator-controller-manager-6f75f45d54-bf27l\" (UID: \"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172544 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4p7\" (UniqueName: \"kubernetes.io/projected/f84f4e53-c1de-49a3-8435-5e4999a034fd-kube-api-access-ch4p7\") pod \"placement-operator-controller-manager-79d5ccc684-mpgzf\" (UID: \"f84f4e53-c1de-49a3-8435-5e4999a034fd\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172751 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb4hm\" (UniqueName: \"kubernetes.io/projected/12b8e76f-853f-4eeb-b6c5-e77d05bec357-kube-api-access-lb4hm\") pod \"telemetry-operator-controller-manager-7567458d64-vvlhm\" (UID: \"12b8e76f-853f-4eeb-b6c5-e77d05bec357\") " pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172833 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zft97\" (UniqueName: \"kubernetes.io/projected/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-kube-api-access-zft97\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"
Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.173072 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.173254 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:47.673237038 +0000 UTC m=+1142.802519650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.185609 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.195009 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4p7\" (UniqueName: \"kubernetes.io/projected/f84f4e53-c1de-49a3-8435-5e4999a034fd-kube-api-access-ch4p7\") pod \"placement-operator-controller-manager-79d5ccc684-mpgzf\" (UID: \"f84f4e53-c1de-49a3-8435-5e4999a034fd\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.200036 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txjsn\" (UniqueName: \"kubernetes.io/projected/a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2-kube-api-access-txjsn\") pod \"ovn-operator-controller-manager-6f75f45d54-bf27l\" (UID: \"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.201486 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d58nt\" (UniqueName: \"kubernetes.io/projected/24caa967-ac26-4666-bf41-e2c4bc6ebb0f-kube-api-access-d58nt\") pod \"swift-operator-controller-manager-547cbdb99f-fljbt\" (UID: \"24caa967-ac26-4666-bf41-e2c4bc6ebb0f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.203227 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zft97\" (UniqueName: \"kubernetes.io/projected/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-kube-api-access-zft97\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.204193 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.237376 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.258034 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.261993 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.264482 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-888v4"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276026 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276492 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb4hm\" (UniqueName: \"kubernetes.io/projected/12b8e76f-853f-4eeb-b6c5-e77d05bec357-kube-api-access-lb4hm\") pod \"telemetry-operator-controller-manager-7567458d64-vvlhm\" (UID: \"12b8e76f-853f-4eeb-b6c5-e77d05bec357\") " pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276601 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6sgs\" (UniqueName: \"kubernetes.io/projected/7f5a8eee-f06b-4376-90d6-ff3faef0e8af-kube-api-access-r6sgs\") pod \"test-operator-controller-manager-69797bbcbd-ph8fw\" (UID: \"7f5a8eee-f06b-4376-90d6-ff3faef0e8af\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276638 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"
Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.277374 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.277426 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.277409399 +0000 UTC m=+1143.406692011 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.295477 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb4hm\" (UniqueName: \"kubernetes.io/projected/12b8e76f-853f-4eeb-b6c5-e77d05bec357-kube-api-access-lb4hm\") pod \"telemetry-operator-controller-manager-7567458d64-vvlhm\" (UID: \"12b8e76f-853f-4eeb-b6c5-e77d05bec357\") " pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.300474 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6sgs\" (UniqueName: \"kubernetes.io/projected/7f5a8eee-f06b-4376-90d6-ff3faef0e8af-kube-api-access-r6sgs\") pod \"test-operator-controller-manager-69797bbcbd-ph8fw\" (UID: \"7f5a8eee-f06b-4376-90d6-ff3faef0e8af\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.317252 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.318673 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.319158 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.322036 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.322178 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xl8wk"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.335836 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.358038 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378422 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378476 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378873 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkpm9\" (UniqueName: \"kubernetes.io/projected/ba33cbc9-9a56-4c45-8c07-19b4110e03c3-kube-api-access-fkpm9\") pod \"watcher-operator-controller-manager-564965969-wvnrt\" (UID: \"ba33cbc9-9a56-4c45-8c07-19b4110e03c3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.380295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-985jh\" (UniqueName: \"kubernetes.io/projected/7707f450-bf8d-4e84-9baa-a02bc80a0b22-kube-api-access-985jh\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378454 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.389342 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.389456 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.393623 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jdtrj"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.458340 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.477636 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt"]
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481844 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkpm9\" (UniqueName: \"kubernetes.io/projected/ba33cbc9-9a56-4c45-8c07-19b4110e03c3-kube-api-access-fkpm9\") pod \"watcher-operator-controller-manager-564965969-wvnrt\" (UID: \"ba33cbc9-9a56-4c45-8c07-19b4110e03c3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-985jh\" (UniqueName: \"kubernetes.io/projected/7707f450-bf8d-4e84-9baa-a02bc80a0b22-kube-api-access-985jh\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481944 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9gk\" (UniqueName: \"kubernetes.io/projected/a4aa00b3-8a54-4f84-907d-34a73b93944f-kube-api-access-pp9gk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gfl97\" (UID: \"a4aa00b3-8a54-4f84-907d-34a73b93944f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"
Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481984 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod
\"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.482008 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482176 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482220 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:47.982206768 +0000 UTC m=+1143.111489380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482827 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482918 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:24:47.982890648 +0000 UTC m=+1143.112173290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.488206 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.502085 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.504411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkpm9\" (UniqueName: \"kubernetes.io/projected/ba33cbc9-9a56-4c45-8c07-19b4110e03c3-kube-api-access-fkpm9\") pod \"watcher-operator-controller-manager-564965969-wvnrt\" (UID: \"ba33cbc9-9a56-4c45-8c07-19b4110e03c3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.507128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-985jh\" (UniqueName: \"kubernetes.io/projected/7707f450-bf8d-4e84-9baa-a02bc80a0b22-kube-api-access-985jh\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.514917 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.526926 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.583663 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9gk\" (UniqueName: \"kubernetes.io/projected/a4aa00b3-8a54-4f84-907d-34a73b93944f-kube-api-access-pp9gk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gfl97\" (UID: \"a4aa00b3-8a54-4f84-907d-34a73b93944f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.602942 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9gk\" (UniqueName: \"kubernetes.io/projected/a4aa00b3-8a54-4f84-907d-34a73b93944f-kube-api-access-pp9gk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gfl97\" (UID: \"a4aa00b3-8a54-4f84-907d-34a73b93944f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.606422 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.662168 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" event={"ID":"e6378a4c-96e5-4151-a0ca-c320fa9b667d","Type":"ContainerStarted","Data":"2a913f8c674a159c3278174d993c98323f0c917de6665906c45c075a660e2217"} Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.685448 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.685929 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.685995 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.685976678 +0000 UTC m=+1143.815259290 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.691546 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.710919 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.988098 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.991456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.991503 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992230 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found 
Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992231 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992303 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.992280882 +0000 UTC m=+1144.121563564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992349 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.992322323 +0000 UTC m=+1144.121604995 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.994877 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7"] Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.010184 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh"] Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.014068 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277579e8_58c3_4ad7_b902_e62f045ba8c6.slice/crio-a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133 WatchSource:0}: Error finding container a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133: Status 404 returned error can't find the container with id a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133 Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.015902 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a64f11_d6ef_487e_afa3_1d9bdbea9424.slice/crio-24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab WatchSource:0}: Error finding container 24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab: Status 404 returned error can't find the container with id 24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.016947 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"] Jan 27 18:24:48 crc 
kubenswrapper[4907]: I0127 18:24:48.174055 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"] Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.180894 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode257f81e_9460_4391_a7a5_cca3fc9230d9.slice/crio-bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1 WatchSource:0}: Error finding container bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1: Status 404 returned error can't find the container with id bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1 Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.225048 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc6ebe7e_320a_4193_8db4_3d4574ba1c3b.slice/crio-6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5 WatchSource:0}: Error finding container 6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5: Status 404 returned error can't find the container with id 6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5 Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.282337 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"] Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.290712 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"] Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.314365 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " 
pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.314524 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.314598 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:50.314581726 +0000 UTC m=+1145.443864338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.676262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerStarted","Data":"6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.678037 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" event={"ID":"277579e8-58c3-4ad7-b902-e62f045ba8c6","Type":"ContainerStarted","Data":"a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.680687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerStarted","Data":"6791a1a05cf7edf0e7799004c25a16ca301d30e4fd67e266ecb7b412401354c2"} Jan 27 
18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.682246 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerStarted","Data":"e095b08c4cc61a5961eb4e1cef0d113446ad75718ba0e98ec2f466b23cd4eaaf"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.683387 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" event={"ID":"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67","Type":"ContainerStarted","Data":"29416602d1c8f7ccd84c959906995921afdfacd328c4887e69f28aaf840355ac"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.684655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerStarted","Data":"24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.686493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerStarted","Data":"6113f37ffc7c7c00f5ec531319dc966276b9517833070b64257bac01ded176b7"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.687393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" event={"ID":"e257f81e-9460-4391-a7a5-cca3fc9230d9","Type":"ContainerStarted","Data":"bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.729227 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.729415 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.729479 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:50.729461788 +0000 UTC m=+1145.858744400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.036047 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.036098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " 
pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036269 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036368 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:51.036347498 +0000 UTC m=+1146.165630190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036422 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036658 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:51.036648867 +0000 UTC m=+1146.165931539 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.071148 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.092661 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.124021 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"] Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.126742 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b8e76f_853f_4eeb_b6c5_e77d05bec357.slice/crio-d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373 WatchSource:0}: Error finding container d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373: Status 404 returned error can't find the container with id d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373 Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.151664 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.173030 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"] Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.173468 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24caa967_ac26_4666_bf41_e2c4bc6ebb0f.slice/crio-64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9 WatchSource:0}: Error finding container 64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9: Status 404 returned error can't find the container with id 64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9 Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.173716 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5a8eee_f06b_4376_90d6_ff3faef0e8af.slice/crio-8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e WatchSource:0}: Error finding container 8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e: Status 404 returned error can't find the container with id 8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.184806 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba33cbc9_9a56_4c45_8c07_19b4110e03c3.slice/crio-a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f WatchSource:0}: Error finding container a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f: Status 404 returned error can't find the container with id a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.192040 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.201116 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.212034 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"] Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.219450 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda733096f_e99d_4186_8542_1d8cb16012d2.slice/crio-c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74 WatchSource:0}: Error finding container c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74: Status 404 returned error can't find the container with id c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74 Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.220691 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84f4e53_c1de_49a3_8435_5e4999a034fd.slice/crio-fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4 WatchSource:0}: Error finding container fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4: Status 404 returned error can't find the container with id fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4 Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.223248 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e8fa01_e75c_41bc_bfbb_affea0fcc0a2.slice/crio-91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a WatchSource:0}: Error finding container 91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a: Status 404 returned error can't find the container with id 91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.224362 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"] Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.224870 4907 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ch4p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-mpgzf_openstack-operators(f84f4e53-c1de-49a3-8435-5e4999a034fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.224871 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxrm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-tn4d6_openstack-operators(a733096f-e99d-4186-8542-1d8cb16012d2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.226366 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" Jan 27 18:24:49 crc 
kubenswrapper[4907]: E0127 18:24:49.226402 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.247656 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.259011 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.697147 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" event={"ID":"12b8e76f-853f-4eeb-b6c5-e77d05bec357","Type":"ContainerStarted","Data":"d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.698616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerStarted","Data":"8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.700093 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerStarted","Data":"3277a2d65aa2648fa54a90b7b9c49bfd972a11487493def4251c9975a7afc309"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.701866 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" 
event={"ID":"24caa967-ac26-4666-bf41-e2c4bc6ebb0f","Type":"ContainerStarted","Data":"64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.703015 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerStarted","Data":"7049b97a9434791020e330fe3bda0152252d9f6d0df88d2336280b23da8b7908"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.704101 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" event={"ID":"ba33cbc9-9a56-4c45-8c07-19b4110e03c3","Type":"ContainerStarted","Data":"a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.705224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerStarted","Data":"c89c168165313113ef77f1cf2263012e2324d10820f794ce3d2f188a8964bbb0"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.706502 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" event={"ID":"f84f4e53-c1de-49a3-8435-5e4999a034fd","Type":"ContainerStarted","Data":"fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.711084 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerStarted","Data":"535c6c6acf6867e03495967a52c6a00758ec204249df8dc05a49e1f4680dcec3"} Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.712337 4907 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.712565 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerStarted","Data":"c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74"} Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.714186 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.715093 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" event={"ID":"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2","Type":"ContainerStarted","Data":"91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a"} Jan 27 18:24:50 crc kubenswrapper[4907]: I0127 18:24:50.358942 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:50 crc 
kubenswrapper[4907]: E0127 18:24:50.359328 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.359378 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:54.359361369 +0000 UTC m=+1149.488643981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.730789 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.731331 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" Jan 27 18:24:50 crc kubenswrapper[4907]: I0127 18:24:50.765994 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.766251 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.766358 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:54.766335812 +0000 UTC m=+1149.895618424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: I0127 18:24:51.071199 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:51 crc kubenswrapper[4907]: I0127 18:24:51.071269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: 
\"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071374 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071450 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071465 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:55.071445572 +0000 UTC m=+1150.200728184 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071535 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:55.071516244 +0000 UTC m=+1150.200798856 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: I0127 18:24:54.390622 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.390789 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.391221 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:02.391202042 +0000 UTC m=+1157.520484654 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: I0127 18:24:54.801398 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.801660 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.801744 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:02.801725568 +0000 UTC m=+1157.931008180 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: I0127 18:24:55.110170 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:55 crc kubenswrapper[4907]: I0127 18:24:55.110292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.110759 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.110873 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:03.110846142 +0000 UTC m=+1158.240128774 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.111337 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.111526 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:03.111478631 +0000 UTC m=+1158.240761243 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.521203 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.521494 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.521548 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.522341 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.522409 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809" gracePeriod=600 Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.782644 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809" exitCode=0 Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.782704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809"} Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.782749 4907 scope.go:117] "RemoveContainer" containerID="6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.027960 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.028126 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slszq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-nznnn_openstack-operators(018e0dfe-5282-40d5-87db-8551645d6e02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.029295 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.821167 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.292159 4907 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.292421 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xr8dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-b29cj_openstack-operators(f1ed42c6-98ac-41b8-96df-24919c0f9837): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.293604 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.803300 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.803547 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdgpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-4nlx7_openstack-operators(e9f20d2f-16bf-49df-9c41-6fd6faa6ef67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.805455 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.846605 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.846765 4907 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.330663 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.330902 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5nnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-9t69q_openstack-operators(f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.332141 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.461369 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod 
\"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.468403 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.713701 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.846135 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.868691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.872578 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.074591 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.173528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.173606 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:03 crc kubenswrapper[4907]: E0127 18:25:03.173931 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:25:03 crc kubenswrapper[4907]: E0127 18:25:03.174060 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:19.174026075 +0000 UTC m=+1174.303308727 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.183068 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.061170 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.061761 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44ftf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-6lprh_openstack-operators(277579e8-58c3-4ad7-b902-e62f045ba8c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.063006 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.585018 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.585194 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhw5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-l2pdl_openstack-operators(774ac09a-4164-4e22-9ea2-385ac4ef87eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.588692 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.862739 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.863067 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.609756 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.610391 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2xqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-7hgqc_openstack-operators(a05cfe48-4bf5-4199-aefa-de59259798c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.611678 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podUID="a05cfe48-4bf5-4199-aefa-de59259798c4" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.890999 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podUID="a05cfe48-4bf5-4199-aefa-de59259798c4" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.105895 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.106078 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6sgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-ph8fw_openstack-operators(7f5a8eee-f06b-4376-90d6-ff3faef0e8af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.108397 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.592880 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.593363 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kszk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-fnh99_openstack-operators(bd2d065d-dd6e-43bc-a725-e7fe52c024b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.594715 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.910214 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.910471 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.049747 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.049983 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l2xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-kjhgn_openstack-operators(e257f81e-9460-4391-a7a5-cca3fc9230d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.051235 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podUID="e257f81e-9460-4391-a7a5-cca3fc9230d9" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.131111 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.131272 4907 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.131943 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lb4hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7567458d64-vvlhm_openstack-operators(12b8e76f-853f-4eeb-b6c5-e77d05bec357): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.133176 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.588160 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.588339 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp9gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-gfl97_openstack-operators(a4aa00b3-8a54-4f84-907d-34a73b93944f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.589840 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" podUID="a4aa00b3-8a54-4f84-907d-34a73b93944f" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.916907 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" podUID="a4aa00b3-8a54-4f84-907d-34a73b93944f" Jan 27 18:25:10 crc 
kubenswrapper[4907]: E0127 18:25:10.916926 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.917186 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podUID="e257f81e-9460-4391-a7a5-cca3fc9230d9" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.158435 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"] Jan 27 18:25:12 crc kubenswrapper[4907]: W0127 18:25:12.166520 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6ac148_bc7a_4480_9155_8f78567a5070.slice/crio-871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418 WatchSource:0}: Error finding container 871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418: Status 404 returned error can't find the container with id 871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418 Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.184522 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"] Jan 27 18:25:12 crc kubenswrapper[4907]: W0127 18:25:12.194825 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6e2a40_e233_4dbe_9b63_0fecf3fc1487.slice/crio-34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c WatchSource:0}: Error finding container 34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c: Status 404 returned error can't find the container with id 34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.944007 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.945645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerStarted","Data":"9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.946076 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.946924 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" event={"ID":"7c6ac148-bc7a-4480-9155-8f78567a5070","Type":"ContainerStarted","Data":"871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.948096 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" 
event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerStarted","Data":"34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.949702 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" event={"ID":"24caa967-ac26-4666-bf41-e2c4bc6ebb0f","Type":"ContainerStarted","Data":"aaa792a6850bf30c995f2388705bd61ee5da27dcd36f052b9830b50ce66d4f57"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.949857 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.951322 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerStarted","Data":"fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.951414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.952886 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" event={"ID":"ba33cbc9-9a56-4c45-8c07-19b4110e03c3","Type":"ContainerStarted","Data":"d8bdbbf49db6fc27563d452c8162685379dba3541a682d34067598486eb1c5f7"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.953019 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.954205 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" event={"ID":"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2","Type":"ContainerStarted","Data":"dc11e3612339e33d628cef05cf8bb6c9ba5cc25baf6be59cd4475859647a42fa"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.954309 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.955493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" event={"ID":"f84f4e53-c1de-49a3-8435-5e4999a034fd","Type":"ContainerStarted","Data":"7d08e58ef5420364403ed35c827110949685dd78c2f0d6d0a9ef915cc60cb69b"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.955636 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.956882 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" event={"ID":"e6378a4c-96e5-4151-a0ca-c320fa9b667d","Type":"ContainerStarted","Data":"f91c4972af323848ffd12c798863c07bd74711b0d6bb5069960abf56894032b3"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.957146 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.958248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerStarted","Data":"d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.958523 4907 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.000263 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" podStartSLOduration=4.414772637 podStartE2EDuration="27.00024021s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.545334947 +0000 UTC m=+1142.674617559" lastFinishedPulling="2026-01-27 18:25:10.13080252 +0000 UTC m=+1165.260085132" observedRunningTime="2026-01-27 18:25:13.00022878 +0000 UTC m=+1168.129511392" watchObservedRunningTime="2026-01-27 18:25:13.00024021 +0000 UTC m=+1168.129522822" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.015958 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podStartSLOduration=7.081265651 podStartE2EDuration="27.015941601s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.178273206 +0000 UTC m=+1144.307555818" lastFinishedPulling="2026-01-27 18:25:09.112949156 +0000 UTC m=+1164.242231768" observedRunningTime="2026-01-27 18:25:13.015251891 +0000 UTC m=+1168.144534503" watchObservedRunningTime="2026-01-27 18:25:13.015941601 +0000 UTC m=+1168.145224213" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.032713 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podStartSLOduration=4.476880348 podStartE2EDuration="27.032696131s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.224755725 +0000 UTC m=+1144.354038337" lastFinishedPulling="2026-01-27 18:25:11.780571508 +0000 UTC m=+1166.909854120" observedRunningTime="2026-01-27 18:25:13.027755819 +0000 
UTC m=+1168.157038431" watchObservedRunningTime="2026-01-27 18:25:13.032696131 +0000 UTC m=+1168.161978743" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.042304 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podStartSLOduration=6.195429004 podStartE2EDuration="27.042283836s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.188671566 +0000 UTC m=+1144.317954168" lastFinishedPulling="2026-01-27 18:25:10.035526348 +0000 UTC m=+1165.164809000" observedRunningTime="2026-01-27 18:25:13.038851538 +0000 UTC m=+1168.168134150" watchObservedRunningTime="2026-01-27 18:25:13.042283836 +0000 UTC m=+1168.171566448" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.061099 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" podStartSLOduration=5.151314671 podStartE2EDuration="27.061082395s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.224742595 +0000 UTC m=+1144.354025207" lastFinishedPulling="2026-01-27 18:25:11.134510319 +0000 UTC m=+1166.263792931" observedRunningTime="2026-01-27 18:25:13.058353177 +0000 UTC m=+1168.187635789" watchObservedRunningTime="2026-01-27 18:25:13.061082395 +0000 UTC m=+1168.190365007" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.072302 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" podStartSLOduration=5.265536494 podStartE2EDuration="27.072288487s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.228869917 +0000 UTC m=+1143.358152519" lastFinishedPulling="2026-01-27 18:25:10.0356219 +0000 UTC m=+1165.164904512" observedRunningTime="2026-01-27 18:25:13.070707821 +0000 UTC m=+1168.199990423" 
watchObservedRunningTime="2026-01-27 18:25:13.072288487 +0000 UTC m=+1168.201571099" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.118642 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podStartSLOduration=4.582461017 podStartE2EDuration="27.118625616s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.224751475 +0000 UTC m=+1144.354034087" lastFinishedPulling="2026-01-27 18:25:11.760916084 +0000 UTC m=+1166.890198686" observedRunningTime="2026-01-27 18:25:13.102918885 +0000 UTC m=+1168.232201507" watchObservedRunningTime="2026-01-27 18:25:13.118625616 +0000 UTC m=+1168.247908228" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.120887 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podStartSLOduration=15.127269096 podStartE2EDuration="27.12087876s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.017956121 +0000 UTC m=+1143.147238733" lastFinishedPulling="2026-01-27 18:25:00.011565785 +0000 UTC m=+1155.140848397" observedRunningTime="2026-01-27 18:25:13.115387253 +0000 UTC m=+1168.244669875" watchObservedRunningTime="2026-01-27 18:25:13.12087876 +0000 UTC m=+1168.250161372" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.165478 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.321345 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.461436 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.495580 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.504209 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.609729 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.999081 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" event={"ID":"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67","Type":"ContainerStarted","Data":"240a72c5b52349d6b5bef7a2cbbe50b43517d40b7fec57cdf1e23e733eff2b3f"} Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.999346 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:25:18 crc kubenswrapper[4907]: I0127 18:25:18.022445 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podStartSLOduration=3.020981698 podStartE2EDuration="32.022404444s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.999455389 +0000 UTC m=+1143.128738001" lastFinishedPulling="2026-01-27 18:25:17.000878115 +0000 UTC m=+1172.130160747" observedRunningTime="2026-01-27 18:25:18.014718463 +0000 UTC m=+1173.144001075" watchObservedRunningTime="2026-01-27 18:25:18.022404444 +0000 UTC m=+1173.151687056" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.008096 
4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerStarted","Data":"cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.008708 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.009355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerStarted","Data":"b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.009991 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.011114 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" event={"ID":"7c6ac148-bc7a-4480-9155-8f78567a5070","Type":"ContainerStarted","Data":"212e88fff355323ad386c5b1bf1a33363f24e77fe11d9ecb10ee883253b1232a"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.011512 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.012478 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerStarted","Data":"d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5"} Jan 27 18:25:19 crc 
kubenswrapper[4907]: I0127 18:25:19.012867 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.014146 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerStarted","Data":"3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.014505 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.015923 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerStarted","Data":"eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.016256 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.028127 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podStartSLOduration=4.127233594 podStartE2EDuration="33.028110499s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.103848292 +0000 UTC m=+1144.233130904" lastFinishedPulling="2026-01-27 18:25:18.004725197 +0000 UTC m=+1173.134007809" observedRunningTime="2026-01-27 18:25:19.024742302 +0000 UTC m=+1174.154024914" watchObservedRunningTime="2026-01-27 18:25:19.028110499 +0000 UTC m=+1174.157393121" Jan 27 18:25:19 
crc kubenswrapper[4907]: I0127 18:25:19.041631 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podStartSLOduration=3.342441696 podStartE2EDuration="33.041615026s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.23070801 +0000 UTC m=+1143.359990622" lastFinishedPulling="2026-01-27 18:25:17.92988134 +0000 UTC m=+1173.059163952" observedRunningTime="2026-01-27 18:25:19.040609737 +0000 UTC m=+1174.169892349" watchObservedRunningTime="2026-01-27 18:25:19.041615026 +0000 UTC m=+1174.170897638" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.065186 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podStartSLOduration=2.873155174 podStartE2EDuration="33.065170081s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.734670871 +0000 UTC m=+1142.863953483" lastFinishedPulling="2026-01-27 18:25:17.926685778 +0000 UTC m=+1173.055968390" observedRunningTime="2026-01-27 18:25:19.060778926 +0000 UTC m=+1174.190061538" watchObservedRunningTime="2026-01-27 18:25:19.065170081 +0000 UTC m=+1174.194452693" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.110114 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podStartSLOduration=4.013508092 podStartE2EDuration="33.11009278s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.103882753 +0000 UTC m=+1144.233165355" lastFinishedPulling="2026-01-27 18:25:18.200467421 +0000 UTC m=+1173.329750043" observedRunningTime="2026-01-27 18:25:19.108602887 +0000 UTC m=+1174.237885499" watchObservedRunningTime="2026-01-27 18:25:19.11009278 +0000 UTC m=+1174.239375392" Jan 27 18:25:19 crc 
kubenswrapper[4907]: I0127 18:25:19.130665 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podStartSLOduration=27.372505757 podStartE2EDuration="33.130642799s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:25:12.168545026 +0000 UTC m=+1167.297827638" lastFinishedPulling="2026-01-27 18:25:17.926682068 +0000 UTC m=+1173.055964680" observedRunningTime="2026-01-27 18:25:19.130083533 +0000 UTC m=+1174.259366135" watchObservedRunningTime="2026-01-27 18:25:19.130642799 +0000 UTC m=+1174.259925411" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.160461 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podStartSLOduration=27.311594119 podStartE2EDuration="33.160440334s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:25:12.19694631 +0000 UTC m=+1167.326228922" lastFinishedPulling="2026-01-27 18:25:18.045792535 +0000 UTC m=+1173.175075137" observedRunningTime="2026-01-27 18:25:19.15960799 +0000 UTC m=+1174.288890622" watchObservedRunningTime="2026-01-27 18:25:19.160440334 +0000 UTC m=+1174.289722946" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.186453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.203493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.458959 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:19 crc kubenswrapper[4907]: W0127 18:25:19.945770 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7707f450_bf8d_4e84_9baa_a02bc80a0b22.slice/crio-dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4 WatchSource:0}: Error finding container dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4: Status 404 returned error can't find the container with id dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4 Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.945841 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"] Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.022769 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" event={"ID":"277579e8-58c3-4ad7-b902-e62f045ba8c6","Type":"ContainerStarted","Data":"8687d1219ca964b3de928510b6385dc80da90b8353abdc933b1e6113258ed971"} Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.022979 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.025105 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" 
event={"ID":"7707f450-bf8d-4e84-9baa-a02bc80a0b22","Type":"ContainerStarted","Data":"dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4"} Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.036832 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podStartSLOduration=2.887386598 podStartE2EDuration="34.03681502s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.017360014 +0000 UTC m=+1143.146642626" lastFinishedPulling="2026-01-27 18:25:19.166788436 +0000 UTC m=+1174.296071048" observedRunningTime="2026-01-27 18:25:20.036163311 +0000 UTC m=+1175.165445933" watchObservedRunningTime="2026-01-27 18:25:20.03681502 +0000 UTC m=+1175.166097632" Jan 27 18:25:21 crc kubenswrapper[4907]: I0127 18:25:21.035800 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" event={"ID":"7707f450-bf8d-4e84-9baa-a02bc80a0b22","Type":"ContainerStarted","Data":"7cfc45be3b2e07dc1c7c5e34289626706c60ade81bc2578f1a6bc9c764b8726b"} Jan 27 18:25:21 crc kubenswrapper[4907]: I0127 18:25:21.076859 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podStartSLOduration=34.07684097 podStartE2EDuration="34.07684097s" podCreationTimestamp="2026-01-27 18:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:21.064470835 +0000 UTC m=+1176.193753457" watchObservedRunningTime="2026-01-27 18:25:21.07684097 +0000 UTC m=+1176.206123582" Jan 27 18:25:22 crc kubenswrapper[4907]: I0127 18:25:22.045457 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:23 
crc kubenswrapper[4907]: I0127 18:25:23.083774 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.069816 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerStarted","Data":"5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e"} Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.070809 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.072280 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerStarted","Data":"67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6"} Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.072544 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.098724 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podStartSLOduration=2.8572543809999997 podStartE2EDuration="38.098697841s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.99464893 +0000 UTC m=+1143.123931542" lastFinishedPulling="2026-01-27 18:25:23.23609238 +0000 UTC m=+1178.365375002" observedRunningTime="2026-01-27 18:25:24.088634312 +0000 UTC m=+1179.217917004" watchObservedRunningTime="2026-01-27 18:25:24.098697841 +0000 UTC m=+1179.227980493" Jan 27 
18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.112300 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podStartSLOduration=4.062847048 podStartE2EDuration="38.1122666s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.186208365 +0000 UTC m=+1144.315490977" lastFinishedPulling="2026-01-27 18:25:23.235627917 +0000 UTC m=+1178.364910529" observedRunningTime="2026-01-27 18:25:24.105361692 +0000 UTC m=+1179.234644344" watchObservedRunningTime="2026-01-27 18:25:24.1122666 +0000 UTC m=+1179.241549252" Jan 27 18:25:25 crc kubenswrapper[4907]: I0127 18:25:25.085657 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerStarted","Data":"0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da"} Jan 27 18:25:25 crc kubenswrapper[4907]: I0127 18:25:25.086628 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:25:25 crc kubenswrapper[4907]: I0127 18:25:25.114113 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podStartSLOduration=4.064147139 podStartE2EDuration="39.114075924s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.129655296 +0000 UTC m=+1144.258937908" lastFinishedPulling="2026-01-27 18:25:24.179584071 +0000 UTC m=+1179.308866693" observedRunningTime="2026-01-27 18:25:25.102283026 +0000 UTC m=+1180.231565638" watchObservedRunningTime="2026-01-27 18:25:25.114075924 +0000 UTC m=+1180.243358576" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.094498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" event={"ID":"12b8e76f-853f-4eeb-b6c5-e77d05bec357","Type":"ContainerStarted","Data":"e1d9f4b07a05b53784b99f794aec021f7adb82a7ee18f9d2c992d0210f48e64b"} Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.094802 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.096286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerStarted","Data":"4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd"} Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.097432 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" event={"ID":"e257f81e-9460-4391-a7a5-cca3fc9230d9","Type":"ContainerStarted","Data":"afe0cabed815da7093f1942f41aed4f24204bb15d7cb9b08b6b20e3098e26d17"} Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.097839 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.113617 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podStartSLOduration=3.433439108 podStartE2EDuration="40.113600011s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.129652366 +0000 UTC m=+1144.258934978" lastFinishedPulling="2026-01-27 18:25:25.809813269 +0000 UTC m=+1180.939095881" observedRunningTime="2026-01-27 18:25:26.107522437 +0000 UTC m=+1181.236805049" watchObservedRunningTime="2026-01-27 18:25:26.113600011 +0000 UTC 
m=+1181.242882623" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.136003 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" podStartSLOduration=3.219314569 podStartE2EDuration="39.135980513s" podCreationTimestamp="2026-01-27 18:24:47 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.21692967 +0000 UTC m=+1144.346212282" lastFinishedPulling="2026-01-27 18:25:25.133595614 +0000 UTC m=+1180.262878226" observedRunningTime="2026-01-27 18:25:26.125275006 +0000 UTC m=+1181.254557638" watchObservedRunningTime="2026-01-27 18:25:26.135980513 +0000 UTC m=+1181.265263125" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.170924 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podStartSLOduration=3.1943348719999998 podStartE2EDuration="40.170894645s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.183280964 +0000 UTC m=+1143.312563566" lastFinishedPulling="2026-01-27 18:25:25.159840717 +0000 UTC m=+1180.289123339" observedRunningTime="2026-01-27 18:25:26.156329817 +0000 UTC m=+1181.285612459" watchObservedRunningTime="2026-01-27 18:25:26.170894645 +0000 UTC m=+1181.300177277" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.696260 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.773419 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.778404 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 
27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.835937 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.888481 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.996687 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:25:27 crc kubenswrapper[4907]: I0127 18:25:27.188459 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:25:27 crc kubenswrapper[4907]: I0127 18:25:27.209016 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:25:29 crc kubenswrapper[4907]: I0127 18:25:29.465396 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:32 crc kubenswrapper[4907]: I0127 18:25:32.721753 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:36 crc kubenswrapper[4907]: I0127 18:25:36.800067 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.134163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.244275 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.518600 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.530035 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.275615 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.277448 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.282340 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.282423 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5p94z" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.284693 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.293005 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.293408 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.347699 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.349151 4907 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.355133 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.356935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.356998 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.357027 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.357048 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.357091 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.363532 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459057 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459117 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459146 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 
18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459240 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.460118 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.460127 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.460277 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.479238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.479703 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.600215 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.667263 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.078537 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.162202 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:56 crc kubenswrapper[4907]: W0127 18:25:56.162350 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a60a3a1_171b_4ea9_b6cc_a20aa1e219c3.slice/crio-dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db WatchSource:0}: Error finding container dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db: Status 404 returned error can't find the container with id dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.554906 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" event={"ID":"145e21b1-c3a2-4057-a5e0-07e7d4196563","Type":"ContainerStarted","Data":"652e65a79ff5cb6213aeb319351d17981ccbb6f22938bc88043e8eeb5ebe6be2"} Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.556941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" 
event={"ID":"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3","Type":"ContainerStarted","Data":"dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db"} Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.103637 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.147717 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.149532 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.196731 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.210848 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.210950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.211049 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " 
pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.316706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.316991 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.317046 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.319093 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.319985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.385072 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.449157 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.481927 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.483987 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.488140 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.499666 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.522424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.522518 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" 
Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.522618 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.626825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.626937 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.626967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.628221 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.629098 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.678226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.966811 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.212023 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:25:59 crc kubenswrapper[4907]: W0127 18:25:59.222681 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode10199f9_f072_4566_ad76_a99c49596214.slice/crio-e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d WatchSource:0}: Error finding container e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d: Status 404 returned error can't find the container with id e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.266773 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.268498 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.275328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.277343 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.286351 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.287822 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288114 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288214 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288291 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288116 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288497 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288906 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q4549" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.289035 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.300101 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-1"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.312255 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.321799 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443843 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443881 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443908 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444014 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444074 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444185 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444215 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444243 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444272 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444357 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444509 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444529 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444566 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444582 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444701 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444723 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444775 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444794 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrgc\" (UniqueName: 
\"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444816 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444835 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444921 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445010 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445066 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445095 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445114 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445160 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546270 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546315 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546346 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546369 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546388 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546409 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546436 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc 
kubenswrapper[4907]: I0127 18:25:59.546460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546496 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546514 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546550 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546583 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546671 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546694 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546727 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546745 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546761 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " 
pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546805 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546822 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546841 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546858 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrgc\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546880 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546893 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547007 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547069 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547115 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547505 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.548422 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.548864 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.548936 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.549382 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " 
pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.551582 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.552432 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.552470 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.554353 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.555255 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.555281 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e84612870a5c0c4830950c12b2fd6510f31530f3fd62287fde6ecf77067364b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.566006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.567281 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.567337 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ec55be154a66d09157b0ca2623a596d4c9f6b8adde5f16f336c822c2282072f/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.567684 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.568714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.568794 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.568955 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " 
pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.569315 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.569698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.571364 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.572187 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.572826 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.574358 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.574843 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.575104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrgc\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.575756 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.576190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.576369 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " 
pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.578231 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.578318 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/34bf333d34756f1b83dde2eb30c2397a83048a027d2708516d2de7b96e990e99/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.578726 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.582292 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.583955 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.590739 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.620902 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.624123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.630200 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.634818 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.639682 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.640274 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerStarted","Data":"e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d"} Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.683512 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.687208 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.693837 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694137 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694282 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694396 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fl6zh" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694347 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.695639 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.695820 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.713850 4907 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751501 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751574 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751666 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751705 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751800 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853291 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853454 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854172 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854447 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854618 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854663 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.856023 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.856052 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d92d749e8b6234664dd57319b2b5b7962d8bfa8dc2f0d92cbae41209d539d4c4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.857381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.859233 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.859576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.860043 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.866512 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.867666 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.876816 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.877806 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.903287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc 
kubenswrapper[4907]: I0127 18:25:59.909510 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.925339 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.018657 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.843238 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.846363 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.851247 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mz4lj" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.852481 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.853138 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.853265 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.853565 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.859654 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-kolla-config\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982670 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982735 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf76f\" (UniqueName: \"kubernetes.io/projected/e57d2b03-9116-4a79-bfc2-5b802cf62910-kube-api-access-vf76f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982798 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.983179 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-default\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.983266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.983325 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085146 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-default\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085168 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085194 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085240 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-kolla-config\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf76f\" (UniqueName: \"kubernetes.io/projected/e57d2b03-9116-4a79-bfc2-5b802cf62910-kube-api-access-vf76f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085357 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085730 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.086139 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-kolla-config\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.086428 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-default\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.087540 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.091153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 
18:26:01.091208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.091663 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.091684 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dbe06434949e9ae5912d882f776373a15677e014e71bfee6b8a0dccace93f9b2/globalmount\"" pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.102662 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf76f\" (UniqueName: \"kubernetes.io/projected/e57d2b03-9116-4a79-bfc2-5b802cf62910-kube-api-access-vf76f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.126793 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.176980 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.303535 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.306218 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328483 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tbn4b" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328679 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328824 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328963 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.341592 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409124 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409168 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409788 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnzv\" (UniqueName: \"kubernetes.io/projected/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kube-api-access-qgnzv\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409961 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511159 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnzv\" (UniqueName: \"kubernetes.io/projected/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kube-api-access-qgnzv\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511367 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511390 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511415 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511461 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.513051 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.513455 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.515311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.515628 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.516161 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.516190 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69c22e915c983652805b9208a0c4a6ac775f245fa6a304f939ec9cd0ce7f310a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.523236 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.524119 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.534611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnzv\" (UniqueName: \"kubernetes.io/projected/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kube-api-access-qgnzv\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.602074 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.653927 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.763636 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.764943 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.780468 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.781046 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.781253 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.781400 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ztsdz" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.819636 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820297 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-config-data\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820410 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820728 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zln9g\" (UniqueName: \"kubernetes.io/projected/407bf5df-e69a-49ae-ac93-858be78d98a0-kube-api-access-zln9g\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820778 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-kolla-config\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931617 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zln9g\" (UniqueName: \"kubernetes.io/projected/407bf5df-e69a-49ae-ac93-858be78d98a0-kube-api-access-zln9g\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-kolla-config\") pod \"memcached-0\" (UID: 
\"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-config-data\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.935391 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.936460 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-kolla-config\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.937692 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-config-data\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.939080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.965922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zln9g\" (UniqueName: \"kubernetes.io/projected/407bf5df-e69a-49ae-ac93-858be78d98a0-kube-api-access-zln9g\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:03 crc kubenswrapper[4907]: I0127 18:26:03.086665 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:26:03 crc kubenswrapper[4907]: I0127 18:26:03.684976 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerStarted","Data":"6f8372a96157a4b3bb9b594a1bb14b4dea21ae1a28e8793346ac6d1505a183aa"} Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.464334 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.465590 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.468014 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dg7j2" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.478084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.569758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"kube-state-metrics-0\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.673333 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"kube-state-metrics-0\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.723530 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"kube-state-metrics-0\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.795713 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.183950 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.187143 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.192938 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.194372 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-xwgj8" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.197943 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.294192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-695jq\" (UniqueName: \"kubernetes.io/projected/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-kube-api-access-695jq\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.294384 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 
18:26:05.395891 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-695jq\" (UniqueName: \"kubernetes.io/projected/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-kube-api-access-695jq\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.396011 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: E0127 18:26:05.396236 4907 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 27 18:26:05 crc kubenswrapper[4907]: E0127 18:26:05.396321 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert podName:6ccb4875-977f-4fea-b3fa-8a4e4ba5a874 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:05.89629876 +0000 UTC m=+1221.025581372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert") pod "observability-ui-dashboards-66cbf594b5-s824m" (UID: "6ccb4875-977f-4fea-b3fa-8a4e4ba5a874") : secret "observability-ui-dashboards" not found Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.415757 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-695jq\" (UniqueName: \"kubernetes.io/projected/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-kube-api-access-695jq\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.468647 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b674f54c6-zhrj9"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.470452 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.491450 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b674f54c6-zhrj9"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525158 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-oauth-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525258 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-oauth-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkrw\" (UniqueName: \"kubernetes.io/projected/a2362241-225f-40e2-9be3-67766a65316b-kube-api-access-hgkrw\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525333 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-console-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525375 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-trusted-ca-bundle\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525404 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525432 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-service-ca\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627425 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-service-ca\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627592 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-oauth-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627666 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-oauth-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627742 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkrw\" (UniqueName: \"kubernetes.io/projected/a2362241-225f-40e2-9be3-67766a65316b-kube-api-access-hgkrw\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627808 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-console-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627896 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-trusted-ca-bundle\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.628919 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-console-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.628947 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-service-ca\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.629204 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-trusted-ca-bundle\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.629255 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-oauth-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.646397 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-oauth-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.650590 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.655987 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkrw\" (UniqueName: \"kubernetes.io/projected/a2362241-225f-40e2-9be3-67766a65316b-kube-api-access-hgkrw\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.751970 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.769287 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.792259 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.792467 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.793060 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.793260 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.793386 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.794730 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v8l29"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.794863 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.800307 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.807199 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.829241 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844302 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844438 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844520 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844785 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844838 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844878 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.845050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.845198 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.948910 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.948964 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.948989 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949025 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949072 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949097 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949162 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949200 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949222 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.950261 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.950487 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.951119 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.954199 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.954347 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.955755 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.955786 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c10aa9d009ec60f264ba4aa31b8554e40bc9aa6367f517a78b05ac7bb1849b2/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.957007 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.961854 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.969226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.969722 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.979332 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.994374 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:06 crc kubenswrapper[4907]: I0127 18:26:06.107434 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"
Jan 27 18:26:06 crc kubenswrapper[4907]: I0127 18:26:06.113983 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 18:26:06 crc kubenswrapper[4907]: I0127 18:26:06.406377 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.400328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96prz"]
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.401671 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.405545 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-brfrw"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.406164 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.408338 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.411833 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2q6jk"]
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.417069 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.441400 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2q6jk"]
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.477203 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz"]
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.497601 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daaea3c0-a88d-442f-be06-bb95b2825fcc-scripts\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.497829 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-log-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.497976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-ovn-controller-tls-certs\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498062 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-etc-ovs\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbrn\" (UniqueName: \"kubernetes.io/projected/daaea3c0-a88d-442f-be06-bb95b2825fcc-kube-api-access-9tbrn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-log\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498316 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-run\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498364 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-combined-ca-bundle\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-lib\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498506 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-scripts\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498691 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498763 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498815 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5lhm\" (UniqueName: \"kubernetes.io/projected/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-kube-api-access-n5lhm\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.599967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-run\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.600276 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-combined-ca-bundle\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601138 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-lib\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.600567 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-run\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-scripts\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601694 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602094 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5lhm\" (UniqueName: \"kubernetes.io/projected/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-kube-api-access-n5lhm\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601380 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-lib\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602405 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602547 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daaea3c0-a88d-442f-be06-bb95b2825fcc-scripts\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-log-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602861 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-ovn-controller-tls-certs\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602972 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-etc-ovs\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603071 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tbrn\" (UniqueName: \"kubernetes.io/projected/daaea3c0-a88d-442f-be06-bb95b2825fcc-kube-api-access-9tbrn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603154 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-log\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603465 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-log\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603744 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-log-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603954 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-etc-ovs\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.604528 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-scripts\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.604704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daaea3c0-a88d-442f-be06-bb95b2825fcc-scripts\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.608203 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-ovn-controller-tls-certs\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.608413 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-combined-ca-bundle\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.618768 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tbrn\" (UniqueName: \"kubernetes.io/projected/daaea3c0-a88d-442f-be06-bb95b2825fcc-kube-api-access-9tbrn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.619818 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5lhm\" (UniqueName: \"kubernetes.io/projected/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-kube-api-access-n5lhm\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.719205 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.723043 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.725028 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728155 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728447 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728727 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728788 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8dj9z"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728965 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.758823 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.758986 4907 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808489 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808570 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x765h\" (UniqueName: \"kubernetes.io/projected/32811f4d-c205-437d-a06c-ac4fff30cead-kube-api-access-x765h\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808625 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808680 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808698 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.809126 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.809214 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.809369 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-config\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911228 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911547 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: 
I0127 18:26:08.911632 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911648 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911672 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-config\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911753 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x765h\" (UniqueName: \"kubernetes.io/projected/32811f4d-c205-437d-a06c-ac4fff30cead-kube-api-access-x765h\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911794 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.913018 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.913264 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-config\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.917165 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.917220 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac3443108e8bcdcc20f6b358fb921b68da41b27927115ae8c43a5ccab21823c7/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.917984 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.918135 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.921183 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.928795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x765h\" (UniqueName: \"kubernetes.io/projected/32811f4d-c205-437d-a06c-ac4fff30cead-kube-api-access-x765h\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.936488 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.959036 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:09 crc kubenswrapper[4907]: I0127 18:26:09.051358 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.115310 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.125830 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.134001 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.134909 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.135118 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.135536 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qbnbr" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.153120 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182792 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182893 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182933 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182957 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182988 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.183008 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.183034 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzx5\" (UniqueName: \"kubernetes.io/projected/7185e8ed-9479-43cc-814b-cfcd26e548a5-kube-api-access-9gzx5\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.183071 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284790 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 
18:26:12.284910 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzx5\" (UniqueName: \"kubernetes.io/projected/7185e8ed-9479-43cc-814b-cfcd26e548a5-kube-api-access-9gzx5\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-config\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284975 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.286258 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.286792 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-config\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.286932 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.289687 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.289716 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bfc67f3587034b5cebb2487e0145ff94b00c6aece8013bfb7f24f752b6cedc8b/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.290588 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.292961 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.293431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 
18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.302630 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzx5\" (UniqueName: \"kubernetes.io/projected/7185e8ed-9479-43cc-814b-cfcd26e548a5-kube-api-access-9gzx5\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.321251 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.456901 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: E0127 18:26:12.956703 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:26:12 crc kubenswrapper[4907]: E0127 18:26:12.957369 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb5dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pgtv9_openstack(145e21b1-c3a2-4057-a5e0-07e7d4196563): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:26:12 crc kubenswrapper[4907]: E0127 18:26:12.958646 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" podUID="145e21b1-c3a2-4057-a5e0-07e7d4196563" Jan 27 18:26:13 crc kubenswrapper[4907]: E0127 18:26:13.054328 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:26:13 crc kubenswrapper[4907]: E0127 18:26:13.054504 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg5f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pb7f4_openstack(8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:26:13 crc kubenswrapper[4907]: E0127 18:26:13.055798 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" podUID="8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.788786 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.799202 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.824449 4907 generic.go:334] "Generic (PLEG): container finished" podID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerID="912990bf5531c8d8dcf347a7806f2a0e43907aeff0a89bc9002b931b3fde59bf" exitCode=0 Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.824637 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" 
event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerDied","Data":"912990bf5531c8d8dcf347a7806f2a0e43907aeff0a89bc9002b931b3fde59bf"} Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.825932 4907 generic.go:334] "Generic (PLEG): container finished" podID="e10199f9-f072-4566-ad76-a99c49596214" containerID="7daaccfc37ac15f3139901ecde0bd9f2893f0a588b9db9d90f5cf0512ba3ae34" exitCode=0 Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.826038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerDied","Data":"7daaccfc37ac15f3139901ecde0bd9f2893f0a588b9db9d90f5cf0512ba3ae34"} Jan 27 18:26:13 crc kubenswrapper[4907]: W0127 18:26:13.834312 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d050d2_eeb4_4603_a6c4_1cbdd454ea35.slice/crio-0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea WatchSource:0}: Error finding container 0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea: Status 404 returned error can't find the container with id 0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea Jan 27 18:26:13 crc kubenswrapper[4907]: W0127 18:26:13.840583 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52cb02a9_7a60_4761_9770_a9b6910f1088.slice/crio-ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40 WatchSource:0}: Error finding container ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40: Status 404 returned error can't find the container with id ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40 Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.975994 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 
18:26:13.993250 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:26:14 crc kubenswrapper[4907]: E0127 18:26:14.098030 4907 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 18:26:14 crc kubenswrapper[4907]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 18:26:14 crc kubenswrapper[4907]: > podSandboxID="e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d" Jan 27 18:26:14 crc kubenswrapper[4907]: E0127 18:26:14.098182 4907 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 18:26:14 crc kubenswrapper[4907]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpr4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zqddl_openstack(e10199f9-f072-4566-ad76-a99c49596214): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 18:26:14 crc kubenswrapper[4907]: > logger="UnhandledError" Jan 27 18:26:14 crc kubenswrapper[4907]: E0127 18:26:14.099374 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" podUID="e10199f9-f072-4566-ad76-a99c49596214" Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.840201 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerStarted","Data":"ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40"} Jan 27 18:26:14 crc 
kubenswrapper[4907]: I0127 18:26:14.843597 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerStarted","Data":"3251ec13ecf2d816f4249d2d95826865dd55f4c6e4f346e728a8820870c8122f"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.845075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerStarted","Data":"0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.847890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerStarted","Data":"d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.849062 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.851047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerStarted","Data":"f438ce9452f05a2c33576c461be5d8342246dc4a389096e1ff8d110a343a2c82"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.879125 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" podStartSLOduration=7.191630988 podStartE2EDuration="16.87910622s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:03.457622845 +0000 UTC m=+1218.586905457" lastFinishedPulling="2026-01-27 18:26:13.145098077 +0000 UTC m=+1228.274380689" observedRunningTime="2026-01-27 18:26:14.870364999 +0000 UTC m=+1229.999647631" watchObservedRunningTime="2026-01-27 18:26:14.87910622 +0000 UTC 
m=+1230.008388832" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.084143 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.121856 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.130129 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.135151 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.146059 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.149689 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174621 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174716 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"145e21b1-c3a2-4057-a5e0-07e7d4196563\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174869 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg5f8\" (UniqueName: 
\"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"145e21b1-c3a2-4057-a5e0-07e7d4196563\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174986 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.175988 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config" (OuterVolumeSpecName: "config") pod "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" (UID: "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.176368 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" (UID: "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.177369 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config" (OuterVolumeSpecName: "config") pod "145e21b1-c3a2-4057-a5e0-07e7d4196563" (UID: "145e21b1-c3a2-4057-a5e0-07e7d4196563"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.182840 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz" (OuterVolumeSpecName: "kube-api-access-hb5dz") pod "145e21b1-c3a2-4057-a5e0-07e7d4196563" (UID: "145e21b1-c3a2-4057-a5e0-07e7d4196563"). InnerVolumeSpecName "kube-api-access-hb5dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.183162 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8" (OuterVolumeSpecName: "kube-api-access-jg5f8") pod "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" (UID: "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3"). InnerVolumeSpecName "kube-api-access-jg5f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.278595 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279075 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279094 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279110 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279123 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.337062 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.366863 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b674f54c6-zhrj9"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.376844 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.384870 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.578810 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: W0127 18:26:15.599473 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7185e8ed_9479_43cc_814b_cfcd26e548a5.slice/crio-999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e WatchSource:0}: Error finding container 999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e: Status 404 returned error can't find the container with id 999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.867312 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerStarted","Data":"3673b3443d4ba1d7f90e11d19590b6b725d3fd74d821289ba3dea4614690e212"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.869282 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.869346 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" event={"ID":"145e21b1-c3a2-4057-a5e0-07e7d4196563","Type":"ContainerDied","Data":"652e65a79ff5cb6213aeb319351d17981ccbb6f22938bc88043e8eeb5ebe6be2"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.877302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" event={"ID":"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3","Type":"ContainerDied","Data":"dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.877322 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.892564 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"956c237313205063c524040dbf960d0b2cac134f53ea06e957d78656bbf34f54"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.894099 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" event={"ID":"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874","Type":"ContainerStarted","Data":"a54cb68638961281cd1c54d4a92084b9e4c3140b27b54ce9b0a6937b829acc51"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.897886 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7185e8ed-9479-43cc-814b-cfcd26e548a5","Type":"ContainerStarted","Data":"999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.909664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"407bf5df-e69a-49ae-ac93-858be78d98a0","Type":"ContainerStarted","Data":"4920ff6ff351a12d39c7685db263fbb989815ef22fa582507cdc66281c7dc4ed"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.913645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerStarted","Data":"20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.913694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerStarted","Data":"a663a2023da5a32f6bd5f0183836e1e7fbb066f93180133dd07965669a43de50"} Jan 27 18:26:15 crc kubenswrapper[4907]: 
I0127 18:26:15.918182 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerStarted","Data":"5bf7de1b06e0edb9333c802c1f12063b0b99ab1bc4bdfce12c369626362c9e4b"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.921814 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz" event={"ID":"daaea3c0-a88d-442f-be06-bb95b2825fcc","Type":"ContainerStarted","Data":"35a118a32372ba6e550f32dc8677672a4c6b740670e8816571c1cb5981cd69d2"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.923540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"1d837f51b3b5b34571907a5beab1539e68355f711343794293643ca8d2f93b93"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.927906 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerStarted","Data":"f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.928720 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.059747 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.069609 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.070708 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" podStartSLOduration=3.964041906 podStartE2EDuration="18.070691577s" podCreationTimestamp="2026-01-27 18:25:58 
+0000 UTC" firstStartedPulling="2026-01-27 18:25:59.225108389 +0000 UTC m=+1214.354391001" lastFinishedPulling="2026-01-27 18:26:13.33175806 +0000 UTC m=+1228.461040672" observedRunningTime="2026-01-27 18:26:16.051530487 +0000 UTC m=+1231.180813099" watchObservedRunningTime="2026-01-27 18:26:16.070691577 +0000 UTC m=+1231.199974189" Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.111670 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.120226 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.125040 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b674f54c6-zhrj9" podStartSLOduration=11.125026845 podStartE2EDuration="11.125026845s" podCreationTimestamp="2026-01-27 18:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:16.095524919 +0000 UTC m=+1231.224807531" watchObservedRunningTime="2026-01-27 18:26:16.125026845 +0000 UTC m=+1231.254309447" Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.331317 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.474003 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2q6jk"] Jan 27 18:26:17 crc kubenswrapper[4907]: W0127 18:26:17.308244 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32811f4d_c205_437d_a06c_ac4fff30cead.slice/crio-8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832 WatchSource:0}: Error finding container 8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832: Status 404 returned error can't 
find the container with id 8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832 Jan 27 18:26:17 crc kubenswrapper[4907]: W0127 18:26:17.314684 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e5e512_03ab_41c7_8cde_1e20d1f72d0d.slice/crio-34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9 WatchSource:0}: Error finding container 34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9: Status 404 returned error can't find the container with id 34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9 Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.761612 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145e21b1-c3a2-4057-a5e0-07e7d4196563" path="/var/lib/kubelet/pods/145e21b1-c3a2-4057-a5e0-07e7d4196563/volumes" Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.764151 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" path="/var/lib/kubelet/pods/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3/volumes" Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.944608 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerStarted","Data":"34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9"} Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.945815 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"32811f4d-c205-437d-a06c-ac4fff30cead","Type":"ContainerStarted","Data":"8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832"} Jan 27 18:26:23 crc kubenswrapper[4907]: I0127 18:26:23.485824 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:23 crc kubenswrapper[4907]: I0127 18:26:23.969467 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:24 crc kubenswrapper[4907]: I0127 18:26:24.039243 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:26:24 crc kubenswrapper[4907]: I0127 18:26:24.043026 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="dnsmasq-dns" containerID="cri-o://f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038" gracePeriod=10 Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.054618 4907 generic.go:334] "Generic (PLEG): container finished" podID="e10199f9-f072-4566-ad76-a99c49596214" containerID="f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038" exitCode=0 Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.054691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerDied","Data":"f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.058253 4907 generic.go:334] "Generic (PLEG): container finished" podID="89e5e512-03ab-41c7-8cde-1e20d1f72d0d" containerID="83179941e673cfd9fba83014df5d73487d53c0b1a94a5def74609dda5504f985" exitCode=0 Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.058330 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerDied","Data":"83179941e673cfd9fba83014df5d73487d53c0b1a94a5def74609dda5504f985"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.062399 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"994bcd73441577f1686e2507659ece4419cf6c1439182a16ddd4180ef5f67906"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.064629 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz" event={"ID":"daaea3c0-a88d-442f-be06-bb95b2825fcc","Type":"ContainerStarted","Data":"5b92df5545c5ea8c056c2f10cd2cdf8a4bb604d3ef980b6dbe0c89ff124e7045"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.064767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-96prz" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.141580 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96prz" podStartSLOduration=9.289736801 podStartE2EDuration="17.141547773s" podCreationTimestamp="2026-01-27 18:26:08 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.359510919 +0000 UTC m=+1230.488793531" lastFinishedPulling="2026-01-27 18:26:23.211321891 +0000 UTC m=+1238.340604503" observedRunningTime="2026-01-27 18:26:25.115906357 +0000 UTC m=+1240.245188989" watchObservedRunningTime="2026-01-27 18:26:25.141547773 +0000 UTC m=+1240.270830385" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.517763 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.616570 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"e10199f9-f072-4566-ad76-a99c49596214\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.616744 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"e10199f9-f072-4566-ad76-a99c49596214\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.616956 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"e10199f9-f072-4566-ad76-a99c49596214\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.622454 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d" (OuterVolumeSpecName: "kube-api-access-jpr4d") pod "e10199f9-f072-4566-ad76-a99c49596214" (UID: "e10199f9-f072-4566-ad76-a99c49596214"). InnerVolumeSpecName "kube-api-access-jpr4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.724520 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.829844 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.830499 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.838825 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.081135 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerDied","Data":"e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d"} Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.081149 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.081295 4907 scope.go:117] "RemoveContainer" containerID="f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.084835 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerStarted","Data":"f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6"} Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.089163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.120356 4907 scope.go:117] "RemoveContainer" containerID="7daaccfc37ac15f3139901ecde0bd9f2893f0a588b9db9d90f5cf0512ba3ae34" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.189020 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.283313 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e10199f9-f072-4566-ad76-a99c49596214" (UID: "e10199f9-f072-4566-ad76-a99c49596214"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.355060 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.982787 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config" (OuterVolumeSpecName: "config") pod "e10199f9-f072-4566-ad76-a99c49596214" (UID: "e10199f9-f072-4566-ad76-a99c49596214"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.986071 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.189666 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" podStartSLOduration=14.91194422 podStartE2EDuration="23.189649627s" podCreationTimestamp="2026-01-27 18:26:05 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.131932721 +0000 UTC m=+1230.261215333" lastFinishedPulling="2026-01-27 18:26:23.409638098 +0000 UTC m=+1238.538920740" observedRunningTime="2026-01-27 18:26:28.187290669 +0000 UTC m=+1243.316573281" watchObservedRunningTime="2026-01-27 18:26:28.189649627 +0000 UTC m=+1243.318932239" Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.246742 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.189278363 podStartE2EDuration="26.246726104s" podCreationTimestamp="2026-01-27 18:26:02 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.15385717 +0000 UTC m=+1230.283139782" 
lastFinishedPulling="2026-01-27 18:26:23.211304901 +0000 UTC m=+1238.340587523" observedRunningTime="2026-01-27 18:26:28.24207298 +0000 UTC m=+1243.371355602" watchObservedRunningTime="2026-01-27 18:26:28.246726104 +0000 UTC m=+1243.376008716" Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.347995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerStarted","Data":"008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348303 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7185e8ed-9479-43cc-814b-cfcd26e548a5","Type":"ContainerStarted","Data":"731d33de975db175e8d1cd57a0c798460c5494dff18b12d28a0dbfb0b3819d7a"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348321 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"644a0a868f8dd791c393c93f9e9b10f1d83e6fb8fded0efd421841f46facbc3b"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348331 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerStarted","Data":"e7dbc577c77fbfea482195487976749b0cc3192db114070a33afd27833b407e1"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348343 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"32811f4d-c205-437d-a06c-ac4fff30cead","Type":"ContainerStarted","Data":"1eff7c3a7c24361045db808712147763696305ae92862383f68d7f81ed7ad178"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerStarted","Data":"6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348375 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerStarted","Data":"9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348394 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" event={"ID":"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874","Type":"ContainerStarted","Data":"13f9d2172cd6b14bbff4bf83ffadbbdaab9c2902a883c61569ab7017d03a58b8"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348405 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"407bf5df-e69a-49ae-ac93-858be78d98a0","Type":"ContainerStarted","Data":"4a4deadd6f20a2b4edaf80cef580afa7e37101f95af38ed4ed51e80b20296292"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.386458 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.397252 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.130197 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerStarted","Data":"e464b2df2751ae75bd57453f56d51a3f252e84a15a2ac488852d31b7a961a145"} Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.130756 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.130773 4907 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.132653 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerStarted","Data":"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13"} Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.132708 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.134378 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerStarted","Data":"ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03"} Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.134447 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" containerID="cri-o://ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03" gracePeriod=600 Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.135517 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.159162 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2q6jk" podStartSLOduration=15.275794088 podStartE2EDuration="21.159143443s" podCreationTimestamp="2026-01-27 18:26:08 +0000 UTC" firstStartedPulling="2026-01-27 18:26:17.328197283 +0000 UTC m=+1232.457479895" lastFinishedPulling="2026-01-27 18:26:23.211546618 +0000 UTC m=+1238.340829250" observedRunningTime="2026-01-27 18:26:29.149657321 +0000 UTC m=+1244.278939943" watchObservedRunningTime="2026-01-27 
18:26:29.159143443 +0000 UTC m=+1244.288426065" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.179983 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.690520177 podStartE2EDuration="25.17995103s" podCreationTimestamp="2026-01-27 18:26:04 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.347902126 +0000 UTC m=+1230.477184738" lastFinishedPulling="2026-01-27 18:26:25.837332979 +0000 UTC m=+1240.966615591" observedRunningTime="2026-01-27 18:26:29.173451274 +0000 UTC m=+1244.302733886" watchObservedRunningTime="2026-01-27 18:26:29.17995103 +0000 UTC m=+1244.309233652" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.762581 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e10199f9-f072-4566-ad76-a99c49596214" path="/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volumes" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.156449 4907 generic.go:334] "Generic (PLEG): container finished" podID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerID="994bcd73441577f1686e2507659ece4419cf6c1439182a16ddd4180ef5f67906" exitCode=0 Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.156764 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerDied","Data":"994bcd73441577f1686e2507659ece4419cf6c1439182a16ddd4180ef5f67906"} Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.847596 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jxkhc"] Jan 27 18:26:31 crc kubenswrapper[4907]: E0127 18:26:31.848200 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="dnsmasq-dns" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.848217 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10199f9-f072-4566-ad76-a99c49596214" 
containerName="dnsmasq-dns" Jan 27 18:26:31 crc kubenswrapper[4907]: E0127 18:26:31.848238 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="init" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.848244 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="init" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.848413 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="dnsmasq-dns" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.849057 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.854829 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.870725 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jxkhc"] Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.874752 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-combined-ca-bundle\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.874822 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovs-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 
18:26:31.874935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovn-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.874962 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.875013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s649c\" (UniqueName: \"kubernetes.io/projected/af6ab393-1e13-4683-81ae-6e28d9261d30-kube-api-access-s649c\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.875204 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6ab393-1e13-4683-81ae-6e28d9261d30-config\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986449 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovn-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc 
kubenswrapper[4907]: I0127 18:26:31.986507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986600 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s649c\" (UniqueName: \"kubernetes.io/projected/af6ab393-1e13-4683-81ae-6e28d9261d30-kube-api-access-s649c\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986723 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6ab393-1e13-4683-81ae-6e28d9261d30-config\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986811 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-combined-ca-bundle\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986855 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovs-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986979 
4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovn-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.987065 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovs-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.987773 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6ab393-1e13-4683-81ae-6e28d9261d30-config\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.993494 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.998841 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-combined-ca-bundle\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.012217 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 
18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.024861 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.028101 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.032451 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.036582 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s649c\" (UniqueName: \"kubernetes.io/projected/af6ab393-1e13-4683-81ae-6e28d9261d30-kube-api-access-s649c\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088452 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088767 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod 
\"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088996 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.173864 4907 generic.go:334] "Generic (PLEG): container finished" podID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerID="644a0a868f8dd791c393c93f9e9b10f1d83e6fb8fded0efd421841f46facbc3b" exitCode=0 Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.173920 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerDied","Data":"644a0a868f8dd791c393c93f9e9b10f1d83e6fb8fded0efd421841f46facbc3b"} Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.178859 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.190825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.190900 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.191022 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.191234 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.191767 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc 
kubenswrapper[4907]: I0127 18:26:32.191811 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.192008 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.221744 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:32 crc kubenswrapper[4907]: E0127 18:26:32.222593 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ggs7z], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" podUID="b1da3ecb-de7c-4586-b873-8c837b0bb690" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.224308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.233762 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.236299 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.239402 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.252982 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395698 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395860 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395922 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395987 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " 
pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.396055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497397 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497545 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497840 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497887 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 
18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497942 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.498284 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.498361 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.498852 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.499444 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.532616 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.610329 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.090704 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.181227 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.191366 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.313895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314098 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314187 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: 
\"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314267 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314376 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314616 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314660 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config" (OuterVolumeSpecName: "config") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.315097 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.315117 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.315127 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.333627 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z" (OuterVolumeSpecName: "kube-api-access-ggs7z") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "kube-api-access-ggs7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.416945 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.190863 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.237237 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.246906 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.689508 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.742756 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.744824 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.764656 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.805452 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843189 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843319 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: 
\"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843458 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843596 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946008 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946137 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: 
\"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946217 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946238 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946295 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.947231 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.947287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 
18:26:34.947308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.947368 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.972585 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.065045 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.202040 4907 generic.go:334] "Generic (PLEG): container finished" podID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerID="ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03" exitCode=0 Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.202086 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerDied","Data":"ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03"} Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.765065 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1da3ecb-de7c-4586-b873-8c837b0bb690" path="/var/lib/kubelet/pods/b1da3ecb-de7c-4586-b873-8c837b0bb690/volumes" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.005085 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176314 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176637 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176885 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176930 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177015 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc 
kubenswrapper[4907]: I0127 18:26:36.177115 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177152 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177176 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177245 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177323 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177395 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177416 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177907 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.183072 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm" (OuterVolumeSpecName: "kube-api-access-6b9vm") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "kube-api-access-6b9vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.183219 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.186347 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.186374 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.186699 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.189903 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out" (OuterVolumeSpecName: "config-out") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.194787 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config" (OuterVolumeSpecName: "web-config") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.195764 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config" (OuterVolumeSpecName: "config") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.207447 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "pvc-f7807fd9-6025-4711-8134-26e284a305f6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.214178 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerDied","Data":"5bf7de1b06e0edb9333c802c1f12063b0b99ab1bc4bdfce12c369626362c9e4b"} Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.214228 4907 scope.go:117] "RemoveContainer" containerID="ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.214362 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.218358 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.219007 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.219032 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.219223 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.225198 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.226239 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.231903 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.232013 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.232241 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.232431 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fwnt8" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289429 4907 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289471 4907 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289485 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289530 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289544 4907 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289576 4907 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289622 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289734 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.363595 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.363764 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7807fd9-6025-4711-8134-26e284a305f6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6") on node "crc" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.387623 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.395890 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.395985 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e986b-1dca-4795-85f7-e62cdd92d995-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-lock\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396123 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396194 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-cache\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396213 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6x4\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-kube-api-access-pj6x4\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396300 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.411258 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.428061 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.430622 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.432725 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.436750 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.436828 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.436955 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v8l29" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.437053 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.437117 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.437189 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.441924 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.442879 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.462874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501040 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.501572 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.501594 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501516 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-cache\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.501647 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:37.001628588 +0000 UTC m=+1252.130911200 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6x4\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-kube-api-access-pj6x4\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501821 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501957 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e986b-1dca-4795-85f7-e62cdd92d995-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.502074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-lock\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.502393 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-lock\") pod \"swift-storage-0\" 
(UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.502568 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-cache\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.504534 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.504600 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/baac55f4b52b016f489a503d203379da7c075c77f069958efa80bd975760c269/globalmount\"" pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.511371 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e986b-1dca-4795-85f7-e62cdd92d995-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.530299 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6x4\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-kube-api-access-pj6x4\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.557516 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.603986 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604118 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604253 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604430 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604674 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604755 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604854 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604909 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604941 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tslf\" (UniqueName: 
\"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.605050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.669610 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jxkhc"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.677161 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:26:36 crc kubenswrapper[4907]: W0127 18:26:36.678697 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6ab393_1e13_4683_81ae_6e28d9261d30.slice/crio-fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0 WatchSource:0}: Error finding container fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0: Status 404 returned error can't find the container with id fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0 Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.706962 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc 
kubenswrapper[4907]: I0127 18:26:36.707024 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707052 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707081 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707127 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707497 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707535 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707605 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707628 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707709 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.710393 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.710787 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.711295 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.711345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.714354 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.714415 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.716607 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.716637 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c10aa9d009ec60f264ba4aa31b8554e40bc9aa6367f517a78b05ac7bb1849b2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.717520 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.717788 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.724839 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.785708 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.841763 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-m9rr7"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.843491 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.845774 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.845978 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.846073 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.879161 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-m9rr7"] Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014381 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014760 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014919 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014962 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014988 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.015013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: E0127 18:26:37.015232 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:37 crc kubenswrapper[4907]: E0127 18:26:37.015255 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:37 crc kubenswrapper[4907]: E0127 18:26:37.015297 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:38.01528065 +0000 UTC m=+1253.144563282 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.060789 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.117445 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.117791 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.117871 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.118134 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.118521 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc 
kubenswrapper[4907]: I0127 18:26:37.118578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.118819 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.126799 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.127018 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.127279 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.147155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.154031 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.155737 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.168856 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.184755 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.233727 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7185e8ed-9479-43cc-814b-cfcd26e548a5","Type":"ContainerStarted","Data":"ec50a24abfc0c0adfac3c4f21b0754b6c82db16ffad97b48fb17b61de62c590f"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.238490 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerID="9210e052e557ee5db0c6cb854a5c34ba61fe01036174f4d90bbedc6157af149a" exitCode=0 Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.238549 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerDied","Data":"9210e052e557ee5db0c6cb854a5c34ba61fe01036174f4d90bbedc6157af149a"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.238593 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerStarted","Data":"14af702ae586c0a32f8c72ffae79c9a4feed72f954b871e63a6bcedfd4970e82"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.243350 4907 generic.go:334] "Generic (PLEG): container finished" podID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerID="74d63fb45a9534d72aef7e31bb8862ecfcd262e095d5005a3061558d89bdb0e8" exitCode=0 Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.243455 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-9g4ks" event={"ID":"ebc9482a-0b0e-48d7-8409-83be16d41469","Type":"ContainerDied","Data":"74d63fb45a9534d72aef7e31bb8862ecfcd262e095d5005a3061558d89bdb0e8"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.243493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-9g4ks" 
event={"ID":"ebc9482a-0b0e-48d7-8409-83be16d41469","Type":"ContainerStarted","Data":"ec2385fbeb3a3eca6fc6a574c69026c728ef33238e2a6408701718a795124c05"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.268389 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.814410728 podStartE2EDuration="26.268367969s" podCreationTimestamp="2026-01-27 18:26:11 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.601806388 +0000 UTC m=+1230.731089000" lastFinishedPulling="2026-01-27 18:26:36.055763629 +0000 UTC m=+1251.185046241" observedRunningTime="2026-01-27 18:26:37.260525214 +0000 UTC m=+1252.389807826" watchObservedRunningTime="2026-01-27 18:26:37.268367969 +0000 UTC m=+1252.397650581" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.282156 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jxkhc" event={"ID":"af6ab393-1e13-4683-81ae-6e28d9261d30","Type":"ContainerStarted","Data":"fbd0e86a97b507a00f4029b3fee02cab6c91e482fa73b49fa373e195f393dc8b"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.282202 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jxkhc" event={"ID":"af6ab393-1e13-4683-81ae-6e28d9261d30","Type":"ContainerStarted","Data":"fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.288094 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.292243 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"32811f4d-c205-437d-a06c-ac4fff30cead","Type":"ContainerStarted","Data":"75d4a84af76842834e842b929a1d5d5a1f48c55a0bd8e730ac57db71f2d6f431"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.297204 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.344284 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.71478229 podStartE2EDuration="38.344258596s" podCreationTimestamp="2026-01-27 18:25:59 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.129627045 +0000 UTC m=+1230.258909657" lastFinishedPulling="2026-01-27 18:26:22.759103351 +0000 UTC m=+1237.888385963" observedRunningTime="2026-01-27 18:26:37.337483441 +0000 UTC m=+1252.466766053" watchObservedRunningTime="2026-01-27 18:26:37.344258596 +0000 UTC m=+1252.473541208" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.401637 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.344406218 podStartE2EDuration="36.401610601s" podCreationTimestamp="2026-01-27 18:26:01 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.154291283 +0000 UTC m=+1230.283573895" lastFinishedPulling="2026-01-27 18:26:23.211495666 +0000 UTC m=+1238.340778278" observedRunningTime="2026-01-27 18:26:37.364528947 +0000 UTC m=+1252.493811549" watchObservedRunningTime="2026-01-27 18:26:37.401610601 +0000 UTC m=+1252.530893243" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.419391 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.671493291 podStartE2EDuration="30.41936783s" podCreationTimestamp="2026-01-27 18:26:07 +0000 UTC" 
firstStartedPulling="2026-01-27 18:26:17.311040691 +0000 UTC m=+1232.440323303" lastFinishedPulling="2026-01-27 18:26:36.05891523 +0000 UTC m=+1251.188197842" observedRunningTime="2026-01-27 18:26:37.390267335 +0000 UTC m=+1252.519549947" watchObservedRunningTime="2026-01-27 18:26:37.41936783 +0000 UTC m=+1252.548650452" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.458350 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.465604 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jxkhc" podStartSLOduration=6.465583475 podStartE2EDuration="6.465583475s" podCreationTimestamp="2026-01-27 18:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:37.408297922 +0000 UTC m=+1252.537580534" watchObservedRunningTime="2026-01-27 18:26:37.465583475 +0000 UTC m=+1252.594866087" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.681733 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:37 crc kubenswrapper[4907]: W0127 18:26:37.693919 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07d384d2_43f4_4290_837f_fb784fc28b37.slice/crio-fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68 WatchSource:0}: Error finding container fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68: Status 404 returned error can't find the container with id fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68 Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.807011 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" 
path="/var/lib/kubelet/pods/f21c64d8-b95d-460b-a32f-1498c725d8e8/volumes" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.867181 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.941617 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.941825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.941907 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.942093 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.942244 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: 
\"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.975173 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7" (OuterVolumeSpecName: "kube-api-access-dj6q7") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "kube-api-access-dj6q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.982869 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-m9rr7"] Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.003231 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.013661 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.024928 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.041085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config" (OuterVolumeSpecName: "config") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045065 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045667 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045765 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045871 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045963 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.046046 4907 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: E0127 18:26:38.046229 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:38 crc kubenswrapper[4907]: E0127 18:26:38.046351 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:38 crc kubenswrapper[4907]: E0127 18:26:38.046462 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:40.046443535 +0000 UTC m=+1255.175726147 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.308880 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerStarted","Data":"2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.309053 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.310885 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.310873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-9g4ks" event={"ID":"ebc9482a-0b0e-48d7-8409-83be16d41469","Type":"ContainerDied","Data":"ec2385fbeb3a3eca6fc6a574c69026c728ef33238e2a6408701718a795124c05"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.311193 4907 scope.go:117] "RemoveContainer" containerID="74d63fb45a9534d72aef7e31bb8862ecfcd262e095d5005a3061558d89bdb0e8" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.312097 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerStarted","Data":"78c8083b39a7e04db9a82458faba7988d1a9a9c438bac13993341986efc58151"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.315294 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.338089 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" podStartSLOduration=4.338073419 podStartE2EDuration="4.338073419s" podCreationTimestamp="2026-01-27 18:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:38.332854089 +0000 UTC m=+1253.462136721" watchObservedRunningTime="2026-01-27 18:26:38.338073419 +0000 UTC m=+1253.467356021" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.435037 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.445359 4907 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.052001 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.052267 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.101694 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.370665 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.457518 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: E0127 18:26:39.468047 4907 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.184:45994->38.102.83.184:45697: write tcp 38.102.83.184:45994->38.102.83.184:45697: write: broken pipe Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.509583 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.762848 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" path="/var/lib/kubelet/pods/ebc9482a-0b0e-48d7-8409-83be16d41469/volumes" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.094052 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:40 crc 
kubenswrapper[4907]: E0127 18:26:40.094242 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:40 crc kubenswrapper[4907]: E0127 18:26:40.094479 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:40 crc kubenswrapper[4907]: E0127 18:26:40.094525 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:44.094509487 +0000 UTC m=+1259.223792099 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.390508 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.656319 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:26:40 crc kubenswrapper[4907]: E0127 18:26:40.657011 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerName="init" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.657081 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerName="init" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.657339 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerName="init" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.658836 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.660881 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.661316 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.661545 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.662374 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8tg87" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.689161 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814338 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814409 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2pq\" (UniqueName: \"kubernetes.io/projected/c4f5ec64-0863-45ef-9090-4768ecd34667-kube-api-access-xk2pq\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814531 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-scripts\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" 
Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814664 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814704 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-config\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814998 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.815065 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916748 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916809 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-config\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916910 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916969 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk2pq\" (UniqueName: \"kubernetes.io/projected/c4f5ec64-0863-45ef-9090-4768ecd34667-kube-api-access-xk2pq\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.917054 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-scripts\") pod \"ovn-northd-0\" (UID: 
\"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.918116 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-scripts\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.918235 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.918285 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-config\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.923850 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.924330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.925399 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.936718 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk2pq\" (UniqueName: \"kubernetes.io/projected/c4f5ec64-0863-45ef-9090-4768ecd34667-kube-api-access-xk2pq\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.981520 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.178142 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.178195 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.354916 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e"} Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.987545 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:26:41 crc kubenswrapper[4907]: W0127 18:26:41.989519 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f5ec64_0863_45ef_9090_4768ecd34667.slice/crio-b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d WatchSource:0}: Error finding container b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d: Status 404 returned error can't find the container with id 
b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.366174 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerStarted","Data":"ca86b15571e895f81aef7824a3bc0577e2b5583c21fc0cc8937be053cc06092b"} Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.367461 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c4f5ec64-0863-45ef-9090-4768ecd34667","Type":"ContainerStarted","Data":"b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d"} Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.389750 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-m9rr7" podStartSLOduration=2.7740508139999998 podStartE2EDuration="6.389731587s" podCreationTimestamp="2026-01-27 18:26:36 +0000 UTC" firstStartedPulling="2026-01-27 18:26:37.978015362 +0000 UTC m=+1253.107297974" lastFinishedPulling="2026-01-27 18:26:41.593696125 +0000 UTC m=+1256.722978747" observedRunningTime="2026-01-27 18:26:42.38950981 +0000 UTC m=+1257.518792433" watchObservedRunningTime="2026-01-27 18:26:42.389731587 +0000 UTC m=+1257.519014209" Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.654313 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.654709 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.791459 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:43 crc kubenswrapper[4907]: I0127 18:26:43.471508 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:43 crc kubenswrapper[4907]: I0127 18:26:43.771448 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 18:26:43 crc kubenswrapper[4907]: I0127 18:26:43.852413 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.195389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:44 crc kubenswrapper[4907]: E0127 18:26:44.195546 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:44 crc kubenswrapper[4907]: E0127 18:26:44.195563 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:44 crc kubenswrapper[4907]: E0127 18:26:44.195635 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:52.195596932 +0000 UTC m=+1267.324879544 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.389323 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c4f5ec64-0863-45ef-9090-4768ecd34667","Type":"ContainerStarted","Data":"f956f9ac8e59b89fd0166993283fa50a28c2a71b225081f9bd124cf61f3864e4"} Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.389395 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c4f5ec64-0863-45ef-9090-4768ecd34667","Type":"ContainerStarted","Data":"2200498760d91a27012199a92a489237ade2905e1a6c76991c1e31ca7468d9f8"} Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.389459 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.421923 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.24821637 podStartE2EDuration="4.421905063s" podCreationTimestamp="2026-01-27 18:26:40 +0000 UTC" firstStartedPulling="2026-01-27 18:26:41.992960777 +0000 UTC m=+1257.122243389" lastFinishedPulling="2026-01-27 18:26:43.16664946 +0000 UTC m=+1258.295932082" observedRunningTime="2026-01-27 18:26:44.40993122 +0000 UTC m=+1259.539213842" watchObservedRunningTime="2026-01-27 18:26:44.421905063 +0000 UTC m=+1259.551187675" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.656975 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.658361 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.665861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.705147 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.705226 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.777702 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.784539 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.791109 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.808058 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.808630 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.810893 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.811414 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.841965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod 
\"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.910814 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.911047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.977681 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.014483 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.014950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.016146 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.035019 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.066711 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" 
Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.112075 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.127622 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.127877 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" containerID="cri-o://d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5" gracePeriod=10 Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.432039 4907 generic.go:334] "Generic (PLEG): container finished" podID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerID="d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5" exitCode=0 Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.432130 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerDied","Data":"d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5"} Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.808394 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.882775 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.886773 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.969145 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.046911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"bfcec505-2d02-4a43-ae48-0861df2f3f03\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.047031 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"bfcec505-2d02-4a43-ae48-0861df2f3f03\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.047077 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"bfcec505-2d02-4a43-ae48-0861df2f3f03\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.053888 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896" (OuterVolumeSpecName: "kube-api-access-52896") pod "bfcec505-2d02-4a43-ae48-0861df2f3f03" (UID: "bfcec505-2d02-4a43-ae48-0861df2f3f03"). InnerVolumeSpecName "kube-api-access-52896". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.100609 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfcec505-2d02-4a43-ae48-0861df2f3f03" (UID: "bfcec505-2d02-4a43-ae48-0861df2f3f03"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.118081 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config" (OuterVolumeSpecName: "config") pod "bfcec505-2d02-4a43-ae48-0861df2f3f03" (UID: "bfcec505-2d02-4a43-ae48-0861df2f3f03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.149389 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.149422 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.149432 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.440530 4907 generic.go:334] "Generic (PLEG): container finished" podID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerID="4d91c25a7314aab9b3fd8d4f969c9d2c94f6673a332760843f4352aa203efe16" exitCode=0 Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.440614 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" event={"ID":"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5","Type":"ContainerDied","Data":"4d91c25a7314aab9b3fd8d4f969c9d2c94f6673a332760843f4352aa203efe16"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.440640 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" event={"ID":"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5","Type":"ContainerStarted","Data":"e510fc68341dc2fc028c26d1ed20355e9d9b8ea566b8dca71ca217457c0123ea"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.445039 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerDied","Data":"6f8372a96157a4b3bb9b594a1bb14b4dea21ae1a28e8793346ac6d1505a183aa"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.445085 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.445130 4907 scope.go:117] "RemoveContainer" containerID="d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.449080 4907 generic.go:334] "Generic (PLEG): container finished" podID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerID="44d85d18431154ddbd383c884bcbc74eacef1eade71d6866721522e05fe32ba7" exitCode=0 Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.449130 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" event={"ID":"5edef5c0-5919-4ddd-93cd-65b569c78603","Type":"ContainerDied","Data":"44d85d18431154ddbd383c884bcbc74eacef1eade71d6866721522e05fe32ba7"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.449167 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" event={"ID":"5edef5c0-5919-4ddd-93cd-65b569c78603","Type":"ContainerStarted","Data":"fcedb1b997edccc07f58b982468b3cad1417f9094c0aeed4a860a6bb871a9579"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.480132 4907 scope.go:117] "RemoveContainer" containerID="912990bf5531c8d8dcf347a7806f2a0e43907aeff0a89bc9002b931b3fde59bf" Jan 27 
18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.511967 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.522157 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:26:47 crc kubenswrapper[4907]: I0127 18:26:47.460498 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e" exitCode=0 Jan 27 18:26:47 crc kubenswrapper[4907]: I0127 18:26:47.460574 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e"} Jan 27 18:26:47 crc kubenswrapper[4907]: I0127 18:26:47.759764 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" path="/var/lib/kubelet/pods/bfcec505-2d02-4a43-ae48-0861df2f3f03/volumes" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.092753 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.099492 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.101271 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"5edef5c0-5919-4ddd-93cd-65b569c78603\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.101394 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"5edef5c0-5919-4ddd-93cd-65b569c78603\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.102519 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5edef5c0-5919-4ddd-93cd-65b569c78603" (UID: "5edef5c0-5919-4ddd-93cd-65b569c78603"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.103885 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.109539 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c" (OuterVolumeSpecName: "kube-api-access-pmz2c") pod "5edef5c0-5919-4ddd-93cd-65b569c78603" (UID: "5edef5c0-5919-4ddd-93cd-65b569c78603"). InnerVolumeSpecName "kube-api-access-pmz2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.205092 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.205362 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.205867 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.206016 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" (UID: "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.208849 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t" (OuterVolumeSpecName: "kube-api-access-j7m6t") pod "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" (UID: "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5"). InnerVolumeSpecName "kube-api-access-j7m6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.307813 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.307846 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.471286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" event={"ID":"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5","Type":"ContainerDied","Data":"e510fc68341dc2fc028c26d1ed20355e9d9b8ea566b8dca71ca217457c0123ea"} Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.472789 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e510fc68341dc2fc028c26d1ed20355e9d9b8ea566b8dca71ca217457c0123ea" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.471351 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.473253 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" event={"ID":"5edef5c0-5919-4ddd-93cd-65b569c78603","Type":"ContainerDied","Data":"fcedb1b997edccc07f58b982468b3cad1417f9094c0aeed4a860a6bb871a9579"} Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.473287 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcedb1b997edccc07f58b982468b3cad1417f9094c0aeed4a860a6bb871a9579" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.473291 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.484330 4907 generic.go:334] "Generic (PLEG): container finished" podID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerID="ca86b15571e895f81aef7824a3bc0577e2b5583c21fc0cc8937be053cc06092b" exitCode=0 Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.484433 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerDied","Data":"ca86b15571e895f81aef7824a3bc0577e2b5583c21fc0cc8937be053cc06092b"} Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.903820 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904674 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerName="mariadb-account-create-update" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904696 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerName="mariadb-account-create-update" 
Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904712 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904720 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904736 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="init" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904742 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="init" Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904777 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerName="mariadb-database-create" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904782 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerName="mariadb-database-create" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904962 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.905009 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerName="mariadb-account-create-update" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.905018 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerName="mariadb-database-create" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.905772 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.911802 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.926026 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.961346 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.961479 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.063394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.063513 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"root-account-create-update-pk78t\" (UID: 
\"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.064415 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.091214 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.176870 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.178752 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.189440 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.227327 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.267352 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.267493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.290485 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.292500 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.296818 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.301385 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374524 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374776 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374884 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.375741 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.395536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.477426 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.477561 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 
18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.478437 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.501725 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.507373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.675031 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.733771 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.909728 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.990667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.990773 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.990874 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991017 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991073 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991174 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991321 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.992201 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.992835 4907 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.993304 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.998044 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5" (OuterVolumeSpecName: "kube-api-access-qfpt5") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "kube-api-access-qfpt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.017882 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.033327 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts" (OuterVolumeSpecName: "scripts") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.039501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.040664 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: W0127 18:26:51.054927 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1662136_4082_412a_9846_92ea9aff9350.slice/crio-03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7 WatchSource:0}: Error finding container 03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7: Status 404 returned error can't find the container with id 03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.060293 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:26:51 crc kubenswrapper[4907]: W0127 18:26:51.071552 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0adeee4_a225_49f2_8a87_f44aa772d5f2.slice/crio-0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892 WatchSource:0}: Error finding container 0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892: Status 404 returned error can't find the container with id 0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.072244 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094380 4907 
reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094409 4907 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094419 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094429 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094437 4907 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094444 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.501858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerDied","Data":"78c8083b39a7e04db9a82458faba7988d1a9a9c438bac13993341986efc58151"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.502245 4907 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="78c8083b39a7e04db9a82458faba7988d1a9a9c438bac13993341986efc58151" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.502308 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.505603 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerStarted","Data":"e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.505642 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerStarted","Data":"03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.509968 4907 generic.go:334] "Generic (PLEG): container finished" podID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerID="bbe32d131e2f18cc943ec38e3f64224872de067fbdcc4c36da535318442ade1c" exitCode=0 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.510047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" event={"ID":"a0adeee4-a225-49f2-8a87-f44aa772d5f2","Type":"ContainerDied","Data":"bbe32d131e2f18cc943ec38e3f64224872de067fbdcc4c36da535318442ade1c"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.510074 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" event={"ID":"a0adeee4-a225-49f2-8a87-f44aa772d5f2","Type":"ContainerStarted","Data":"0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.512511 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerID="3bbc9b483b2ac3711ce029100cb12ceb3f91e479b6591235f5a6fbedf804d371" exitCode=0 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.512548 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pk78t" event={"ID":"6595f747-432a-4afe-ad8c-fd3f44fa85e6","Type":"ContainerDied","Data":"3bbc9b483b2ac3711ce029100cb12ceb3f91e479b6591235f5a6fbedf804d371"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.512618 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pk78t" event={"ID":"6595f747-432a-4afe-ad8c-fd3f44fa85e6","Type":"ContainerStarted","Data":"742f1640d1937fd1cdb36c1320b8e2c7bc39f79ae110bf29cfd954a805839233"} Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.222273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.243493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.244070 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.525954 4907 generic.go:334] "Generic (PLEG): container finished" podID="e1662136-4082-412a-9846-92ea9aff9350" containerID="e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b" exitCode=0 Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.526021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerDied","Data":"e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b"} Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.665367 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:26:52 crc kubenswrapper[4907]: E0127 18:26:52.665855 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerName="swift-ring-rebalance" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.665866 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerName="swift-ring-rebalance" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.666052 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerName="swift-ring-rebalance" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.666722 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.679828 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.732894 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.732936 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.779898 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.781112 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.783658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.798648 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834613 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834784 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834822 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834934 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: 
\"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.836338 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.858205 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.938596 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.938707 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.942598 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod 
\"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.956883 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.988454 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.988996 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.989401 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-65dccccccb-km74l" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console" containerID="cri-o://73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45" gracePeriod=14 Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.990048 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.001350 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.044885 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.045038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.074480 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.077175 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.079100 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.086637 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.106491 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.147364 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.147453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.148861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.148947 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.149081 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod 
\"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.167080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.259120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.259557 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.260450 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.281624 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sh46\" (UniqueName: 
\"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.325549 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.346801 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.348238 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.363489 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.409199 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.444379 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.445997 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.447962 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.457501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.462383 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.462471 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.539061 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dccccccb-km74l_4e69063c-9ede-4474-9fd3-b16db60b9a7c/console/0.log" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.539733 4907 generic.go:334] "Generic (PLEG): container finished" podID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerID="73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45" exitCode=2 Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.539801 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" 
event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerDied","Data":"73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45"} Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564171 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564380 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.565615 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.586590 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.666399 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.666544 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.667430 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.681930 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.682889 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.769484 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.561966 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" event={"ID":"a0adeee4-a225-49f2-8a87-f44aa772d5f2","Type":"ContainerDied","Data":"0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892"} Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.562284 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.565453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pk78t" event={"ID":"6595f747-432a-4afe-ad8c-fd3f44fa85e6","Type":"ContainerDied","Data":"742f1640d1937fd1cdb36c1320b8e2c7bc39f79ae110bf29cfd954a805839233"} Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.565531 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742f1640d1937fd1cdb36c1320b8e2c7bc39f79ae110bf29cfd954a805839233" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.567133 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" 
event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerDied","Data":"03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7"} Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.567158 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.630441 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.664828 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.675881 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.692139 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.692231 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.693533 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"a0adeee4-a225-49f2-8a87-f44aa772d5f2" (UID: "a0adeee4-a225-49f2-8a87-f44aa772d5f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.703282 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm" (OuterVolumeSpecName: "kube-api-access-wqtnm") pod "a0adeee4-a225-49f2-8a87-f44aa772d5f2" (UID: "a0adeee4-a225-49f2-8a87-f44aa772d5f2"). InnerVolumeSpecName "kube-api-access-wqtnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.796679 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.796984 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797003 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"e1662136-4082-412a-9846-92ea9aff9350\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnz9z\" (UniqueName: 
\"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"e1662136-4082-412a-9846-92ea9aff9350\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797640 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797668 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.798030 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1662136-4082-412a-9846-92ea9aff9350" (UID: "e1662136-4082-412a-9846-92ea9aff9350"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.800933 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z" (OuterVolumeSpecName: "kube-api-access-dnz9z") pod "e1662136-4082-412a-9846-92ea9aff9350" (UID: "e1662136-4082-412a-9846-92ea9aff9350"). InnerVolumeSpecName "kube-api-access-dnz9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.805020 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2" (OuterVolumeSpecName: "kube-api-access-ppbs2") pod "6595f747-432a-4afe-ad8c-fd3f44fa85e6" (UID: "6595f747-432a-4afe-ad8c-fd3f44fa85e6"). InnerVolumeSpecName "kube-api-access-ppbs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.805466 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6595f747-432a-4afe-ad8c-fd3f44fa85e6" (UID: "6595f747-432a-4afe-ad8c-fd3f44fa85e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.826759 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dccccccb-km74l_4e69063c-9ede-4474-9fd3-b16db60b9a7c/console/0.log" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.826839 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.899420 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.899941 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900093 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900233 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900327 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900479 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.901006 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config" (OuterVolumeSpecName: "console-config") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.901037 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.901051 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca" (OuterVolumeSpecName: "service-ca") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903191 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903286 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903331 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903349 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903361 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903374 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903390 4907 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903406 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.904203 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw" (OuterVolumeSpecName: "kube-api-access-bglhw") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "kube-api-access-bglhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.904691 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.905470 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005358 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005398 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005411 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005423 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.581624 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb"} Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.583649 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dccccccb-km74l_4e69063c-9ede-4474-9fd3-b16db60b9a7c/console/0.log" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.583744 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586621 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerDied","Data":"d8e779a538fd171e62688ea894409db54be158536da7dede33422f76801c0085"} Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586684 4907 scope.go:117] "RemoveContainer" containerID="73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586711 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586783 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586815 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.631743 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.640092 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.761331 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" path="/var/lib/kubelet/pods/4e69063c-9ede-4474-9fd3-b16db60b9a7c/volumes" Jan 27 18:26:55 crc kubenswrapper[4907]: W0127 18:26:55.819090 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfbf931_f21b_4652_8640_0208df4b40cc.slice/crio-b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712 WatchSource:0}: Error finding container b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712: Status 404 returned error can't find the container with id b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712 Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.840627 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.864306 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.052627 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.068648 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:26:56 crc kubenswrapper[4907]: W0127 18:26:56.072448 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1904c81_5de8_431a_9304_5b4ba1771c73.slice/crio-c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01 WatchSource:0}: Error finding container c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01: Status 404 returned error can't find the container with id c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01 Jan 27 18:26:56 crc kubenswrapper[4907]: W0127 18:26:56.075572 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c433c1_ca56_4d2d_ac7b_0f2ceadcaf8d.slice/crio-777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea WatchSource:0}: Error finding container 777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea: Status 404 returned error can't find the container with id 777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.081423 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.100415 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.179211 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:26:56 crc kubenswrapper[4907]: W0127 18:26:56.185756 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf7e986b_1dca_4795_85f7_e62cdd92d995.slice/crio-1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2 WatchSource:0}: Error finding container 1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2: Status 404 returned error can't find the container with id 1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2 Jan 27 18:26:56 
crc kubenswrapper[4907]: I0127 18:26:56.273419 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.283414 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.594722 4907 generic.go:334] "Generic (PLEG): container finished" podID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerID="4b4bc386243282ee46469e04f6c8ed985996c9353b6cb7136b8bfb839c0ee6f9" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.594770 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9r669" event={"ID":"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d","Type":"ContainerDied","Data":"4b4bc386243282ee46469e04f6c8ed985996c9353b6cb7136b8bfb839c0ee6f9"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.595198 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9r669" event={"ID":"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d","Type":"ContainerStarted","Data":"777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.598331 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.600117 4907 generic.go:334] "Generic (PLEG): container finished" podID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerID="7e112e59f5539451246e55f428962aa397a6f7440a0b99d285fc7caa5e097dae" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.600199 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z8s67" 
event={"ID":"94f0fdef-b14b-4204-be1e-90a5d19c96e7","Type":"ContainerDied","Data":"7e112e59f5539451246e55f428962aa397a6f7440a0b99d285fc7caa5e097dae"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.600232 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z8s67" event={"ID":"94f0fdef-b14b-4204-be1e-90a5d19c96e7","Type":"ContainerStarted","Data":"17d014ea0685c8867c39231eec5d1467fcf85b1648296fd7fed465a508647fea"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.602425 4907 generic.go:334] "Generic (PLEG): container finished" podID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerID="a0326a0a501bbf85df41833a1dcafeaa580f24dd04c07b7e0136b03e2680cb1d" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.602484 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c84c-account-create-update-4ld5d" event={"ID":"0ef0a2ee-9212-41c9-b2b9-d59602779eef","Type":"ContainerDied","Data":"a0326a0a501bbf85df41833a1dcafeaa580f24dd04c07b7e0136b03e2680cb1d"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.602513 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c84c-account-create-update-4ld5d" event={"ID":"0ef0a2ee-9212-41c9-b2b9-d59602779eef","Type":"ContainerStarted","Data":"246c6f82a91c258ac80288f958a885d0bd43f6f1fc6a0f307cb42d21345d9db6"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.604495 4907 generic.go:334] "Generic (PLEG): container finished" podID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerID="c19d26368aac03d155fdb3c70b0039080c0304f82ccc02493e32e5a1524bf346" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.604539 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-214c-account-create-update-5x6dm" event={"ID":"3dfbf931-f21b-4652-8640-0208df4b40cc","Type":"ContainerDied","Data":"c19d26368aac03d155fdb3c70b0039080c0304f82ccc02493e32e5a1524bf346"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 
18:26:56.604595 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-214c-account-create-update-5x6dm" event={"ID":"3dfbf931-f21b-4652-8640-0208df4b40cc","Type":"ContainerStarted","Data":"b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712"}
Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.613077 4907 generic.go:334] "Generic (PLEG): container finished" podID="d7319b76-e25b-4370-ac3e-641efd764024" containerID="10e55fe5e5f3f44965d382e66da77d31f65621ac8cb2c4078f7f47ef99fb45e2" exitCode=0
Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.613165 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0abc-account-create-update-gwjft" event={"ID":"d7319b76-e25b-4370-ac3e-641efd764024","Type":"ContainerDied","Data":"10e55fe5e5f3f44965d382e66da77d31f65621ac8cb2c4078f7f47ef99fb45e2"}
Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.613203 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0abc-account-create-update-gwjft" event={"ID":"d7319b76-e25b-4370-ac3e-641efd764024","Type":"ContainerStarted","Data":"81e9a0b128516c273908688dfb83a6c596d1f868fadf9a8d45bb6728270ec6e8"}
Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.615852 4907 generic.go:334] "Generic (PLEG): container finished" podID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerID="96f5f54754dcd10e1621eddfd599cc7bbc58a42f87dd064a880d55efea873246" exitCode=0
Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.615920 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdj7p" event={"ID":"f1904c81-5de8-431a-9304-5b4ba1771c73","Type":"ContainerDied","Data":"96f5f54754dcd10e1621eddfd599cc7bbc58a42f87dd064a880d55efea873246"}
Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.615959 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdj7p" event={"ID":"f1904c81-5de8-431a-9304-5b4ba1771c73","Type":"ContainerStarted","Data":"c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01"}
Jan 27 18:26:57 crc kubenswrapper[4907]: I0127 18:26:57.640526 4907 generic.go:334] "Generic (PLEG): container finished" podID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerID="f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6" exitCode=0
Jan 27 18:26:57 crc kubenswrapper[4907]: I0127 18:26:57.640878 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerDied","Data":"f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6"}
Jan 27 18:26:57 crc kubenswrapper[4907]: I0127 18:26:57.825606 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" path="/var/lib/kubelet/pods/6595f747-432a-4afe-ad8c-fd3f44fa85e6/volumes"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.263407 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.404435 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod \"3dfbf931-f21b-4652-8640-0208df4b40cc\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.404614 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"3dfbf931-f21b-4652-8640-0208df4b40cc\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.409524 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr" (OuterVolumeSpecName: "kube-api-access-n7kjr") pod "3dfbf931-f21b-4652-8640-0208df4b40cc" (UID: "3dfbf931-f21b-4652-8640-0208df4b40cc"). InnerVolumeSpecName "kube-api-access-n7kjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.410024 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dfbf931-f21b-4652-8640-0208df4b40cc" (UID: "3dfbf931-f21b-4652-8640-0208df4b40cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.410870 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.457257 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.492488 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdj7p"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.506159 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"d7319b76-e25b-4370-ac3e-641efd764024\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.506388 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"d7319b76-e25b-4370-ac3e-641efd764024\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.506958 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7319b76-e25b-4370-ac3e-641efd764024" (UID: "d7319b76-e25b-4370-ac3e-641efd764024"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.507243 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.507321 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.507450 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.515170 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9" (OuterVolumeSpecName: "kube-api-access-f2pt9") pod "d7319b76-e25b-4370-ac3e-641efd764024" (UID: "d7319b76-e25b-4370-ac3e-641efd764024"). InnerVolumeSpecName "kube-api-access-f2pt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.609321 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"f1904c81-5de8-431a-9304-5b4ba1771c73\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.609811 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.609973 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.610086 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1904c81-5de8-431a-9304-5b4ba1771c73" (UID: "f1904c81-5de8-431a-9304-5b4ba1771c73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.610160 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"f1904c81-5de8-431a-9304-5b4ba1771c73\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.610913 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ef0a2ee-9212-41c9-b2b9-d59602779eef" (UID: "0ef0a2ee-9212-41c9-b2b9-d59602779eef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.611052 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.611098 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.611123 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.614292 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl" (OuterVolumeSpecName: "kube-api-access-zh2rl") pod "f1904c81-5de8-431a-9304-5b4ba1771c73" (UID: "f1904c81-5de8-431a-9304-5b4ba1771c73"). InnerVolumeSpecName "kube-api-access-zh2rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.614663 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46" (OuterVolumeSpecName: "kube-api-access-4sh46") pod "0ef0a2ee-9212-41c9-b2b9-d59602779eef" (UID: "0ef0a2ee-9212-41c9-b2b9-d59602779eef"). InnerVolumeSpecName "kube-api-access-4sh46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.660941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-214c-account-create-update-5x6dm" event={"ID":"3dfbf931-f21b-4652-8640-0208df4b40cc","Type":"ContainerDied","Data":"b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712"}
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.660980 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.661038 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.667459 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0abc-account-create-update-gwjft" event={"ID":"d7319b76-e25b-4370-ac3e-641efd764024","Type":"ContainerDied","Data":"81e9a0b128516c273908688dfb83a6c596d1f868fadf9a8d45bb6728270ec6e8"}
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.667513 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e9a0b128516c273908688dfb83a6c596d1f868fadf9a8d45bb6728270ec6e8"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.667606 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.673045 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdj7p" event={"ID":"f1904c81-5de8-431a-9304-5b4ba1771c73","Type":"ContainerDied","Data":"c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01"}
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.673123 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.673243 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdj7p"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.679301 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"7706857a2eda31b255c275fc07115ff6bbb35d1894888292c20b83bc5cda73d4"}
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.685287 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c84c-account-create-update-4ld5d" event={"ID":"0ef0a2ee-9212-41c9-b2b9-d59602779eef","Type":"ContainerDied","Data":"246c6f82a91c258ac80288f958a885d0bd43f6f1fc6a0f307cb42d21345d9db6"}
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.685331 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246c6f82a91c258ac80288f958a885d0bd43f6f1fc6a0f307cb42d21345d9db6"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.685400 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.712948 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.712968 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.738669 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z8s67"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.778591 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-96prz" podUID="daaea3c0-a88d-442f-be06-bb95b2825fcc" containerName="ovn-controller" probeResult="failure" output=<
Jan 27 18:26:58 crc kubenswrapper[4907]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 27 18:26:58 crc kubenswrapper[4907]: >
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.801515 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9r669"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.815450 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.815524 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.816485 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94f0fdef-b14b-4204-be1e-90a5d19c96e7" (UID: "94f0fdef-b14b-4204-be1e-90a5d19c96e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.820168 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c" (OuterVolumeSpecName: "kube-api-access-bdq4c") pod "94f0fdef-b14b-4204-be1e-90a5d19c96e7" (UID: "94f0fdef-b14b-4204-be1e-90a5d19c96e7"). InnerVolumeSpecName "kube-api-access-bdq4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.825219 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.828229 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2q6jk"
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.916880 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.917235 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") "
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.917398 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" (UID: "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.919929 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.919960 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.919974 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.922595 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg" (OuterVolumeSpecName: "kube-api-access-82plg") pod "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" (UID: "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d"). InnerVolumeSpecName "kube-api-access-82plg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.022926 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") on node \"crc\" DevicePath \"\""
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.156907 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"]
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157281 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157298 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157320 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1662136-4082-412a-9846-92ea9aff9350" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157327 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1662136-4082-412a-9846-92ea9aff9350" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157342 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7319b76-e25b-4370-ac3e-641efd764024" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157348 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7319b76-e25b-4370-ac3e-641efd764024" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157366 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157371 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157387 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157393 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157404 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157410 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157422 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157428 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157435 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157441 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157452 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157458 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157465 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157471 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157662 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157671 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157677 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157690 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1662136-4082-412a-9846-92ea9aff9350" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157702 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157714 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7319b76-e25b-4370-ac3e-641efd764024" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157726 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157735 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157746 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerName="mariadb-account-create-update"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157757 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerName="mariadb-database-create"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.158648 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.162064 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.172030 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"]
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331134 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331238 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331447 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331502 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433758 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433854 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433888 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433964 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.434038 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.434080 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.435900 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.435979 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.436034 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.436280 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.437387 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.458205 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.480445 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.699805 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerStarted","Data":"7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1"}
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.700265 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.702881 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9r669" event={"ID":"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d","Type":"ContainerDied","Data":"777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea"}
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.702924 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.702980 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9r669"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.717764 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"f6a8c2a03beefa353abd26d99a6f91a1eb474d911b85150474113b8b7404af6e"}
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.722231 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z8s67" event={"ID":"94f0fdef-b14b-4204-be1e-90a5d19c96e7","Type":"ContainerDied","Data":"17d014ea0685c8867c39231eec5d1467fcf85b1648296fd7fed465a508647fea"}
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.722473 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17d014ea0685c8867c39231eec5d1467fcf85b1648296fd7fed465a508647fea"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.723003 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z8s67"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.744832 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa"}
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.751494 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.120337723 podStartE2EDuration="1m1.751471758s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.994522928 +0000 UTC m=+1229.123805540" lastFinishedPulling="2026-01-27 18:26:22.625656963 +0000 UTC m=+1237.754939575" observedRunningTime="2026-01-27 18:26:59.734542913 +0000 UTC m=+1274.863825525" watchObservedRunningTime="2026-01-27 18:26:59.751471758 +0000 UTC m=+1274.880754380"
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.910917 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4m58q"]
Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.913195 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.916568 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.928283 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.977175 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.056041 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.056148 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.157731 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.158188 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.159112 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.177988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.272769 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.395907 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.398740 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.405179 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.413471 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.565356 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.565740 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.565791 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.667755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.667836 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.668295 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.674386 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.674942 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.685238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.758057 4907 generic.go:334] "Generic (PLEG): container finished" podID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" 
containerID="6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a" exitCode=0 Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.758403 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerDied","Data":"6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.762265 4907 generic.go:334] "Generic (PLEG): container finished" podID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerID="9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87" exitCode=0 Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.762366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerDied","Data":"9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.768910 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"fc62989a2170f69bb2ca0015679f24a3a2e45c2c211bcc5ef6b0ed5f362736d1"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.768977 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"6fb382f51c20c62e57b816a0aedce41d2313c9641951bf7b79c5dc7f9169ab53"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.778818 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-rtl4s" event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerStarted","Data":"eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.779047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-96prz-config-rtl4s" event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerStarted","Data":"15a7ecb822aef91fc03055f2782820ad477ef8d9526264f5695c3e74dfdf2cde"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.780463 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.788808 4907 generic.go:334] "Generic (PLEG): container finished" podID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerID="008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e" exitCode=0 Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.788880 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerDied","Data":"008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.839927 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96prz-config-rtl4s" podStartSLOduration=1.839898996 podStartE2EDuration="1.839898996s" podCreationTimestamp="2026-01-27 18:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:00.814277412 +0000 UTC m=+1275.943560014" watchObservedRunningTime="2026-01-27 18:27:00.839898996 +0000 UTC m=+1275.969181608" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.854228 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:27:00 crc kubenswrapper[4907]: W0127 18:27:00.859343 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b62675_d164_4a1a_b3a3_e21eda5b7190.slice/crio-059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04 
WatchSource:0}: Error finding container 059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04: Status 404 returned error can't find the container with id 059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04 Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.110935 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.332091 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:27:01 crc kubenswrapper[4907]: W0127 18:27:01.656335 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0791214_d591_446c_a64a_e1e0f237392e.slice/crio-77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c WatchSource:0}: Error finding container 77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c: Status 404 returned error can't find the container with id 77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.815054 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerStarted","Data":"4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.815282 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.818259 4907 generic.go:334] "Generic (PLEG): container finished" podID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerID="eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b" exitCode=0 Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.818335 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-rtl4s" 
event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerDied","Data":"eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.820049 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerID="16729300b105c848b87da536ab581fbf0466941c7a08dd9bcf81bc9c3e1432ed" exitCode=0 Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.820081 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4m58q" event={"ID":"a6b62675-d164-4a1a-b3a3-e21eda5b7190","Type":"ContainerDied","Data":"16729300b105c848b87da536ab581fbf0466941c7a08dd9bcf81bc9c3e1432ed"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.820115 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4m58q" event={"ID":"a6b62675-d164-4a1a-b3a3-e21eda5b7190","Type":"ContainerStarted","Data":"059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.821912 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerStarted","Data":"dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.822680 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.823574 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerStarted","Data":"77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.825487 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerStarted","Data":"435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.825830 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.841475 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=55.129481049 podStartE2EDuration="1m3.841456283s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.913796763 +0000 UTC m=+1229.043079375" lastFinishedPulling="2026-01-27 18:26:22.625771997 +0000 UTC m=+1237.755054609" observedRunningTime="2026-01-27 18:27:01.834628927 +0000 UTC m=+1276.963911539" watchObservedRunningTime="2026-01-27 18:27:01.841456283 +0000 UTC m=+1276.970738895" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.861088 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.493836948 podStartE2EDuration="1m3.861069915s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.844264839 +0000 UTC m=+1228.973547451" lastFinishedPulling="2026-01-27 18:26:23.211497766 +0000 UTC m=+1238.340780418" observedRunningTime="2026-01-27 18:27:01.85460168 +0000 UTC m=+1276.983884292" watchObservedRunningTime="2026-01-27 18:27:01.861069915 +0000 UTC m=+1276.990352527" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.922281 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=54.696237644 podStartE2EDuration="1m3.922257941s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.985655844 +0000 UTC m=+1229.114938456" lastFinishedPulling="2026-01-27 18:26:23.211676131 +0000 UTC 
m=+1238.340958753" observedRunningTime="2026-01-27 18:27:01.894129764 +0000 UTC m=+1277.023412386" watchObservedRunningTime="2026-01-27 18:27:01.922257941 +0000 UTC m=+1277.051540553" Jan 27 18:27:02 crc kubenswrapper[4907]: I0127 18:27:02.889151 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"3bf18378195508547b70b89096bf372a7dc672ffe170cbac40d99307c96a1858"} Jan 27 18:27:02 crc kubenswrapper[4907]: I0127 18:27:02.889400 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"2dfba7a355e582c2581f135d01bb43a9840b2c192c5a3f3869769f73c76aaadf"} Jan 27 18:27:02 crc kubenswrapper[4907]: I0127 18:27:02.889410 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"5bdd99948c581772fceb902e534dba55a63f64d1b33c3ce75125e4ee4aa82231"} Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.740391 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.742968 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.746493 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5gmz5" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.746816 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779345 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779388 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779518 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 
crc kubenswrapper[4907]: I0127 18:27:03.812522 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881765 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881812 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881868 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.891387 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " 
pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.918465 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.919473 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-96prz" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.920163 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.921341 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.089329 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.615182 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.639227 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704332 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704417 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704515 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704620 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704788 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704797 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704865 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704875 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704884 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run" (OuterVolumeSpecName: "var-run") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704912 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.705499 4907 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.705811 4907 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.705827 4907 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.706274 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.706501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6b62675-d164-4a1a-b3a3-e21eda5b7190" (UID: "a6b62675-d164-4a1a-b3a3-e21eda5b7190"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.706519 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts" (OuterVolumeSpecName: "scripts") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.736723 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx" (OuterVolumeSpecName: "kube-api-access-5z2fx") pod "a6b62675-d164-4a1a-b3a3-e21eda5b7190" (UID: "a6b62675-d164-4a1a-b3a3-e21eda5b7190"). InnerVolumeSpecName "kube-api-access-5z2fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.744728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24" (OuterVolumeSpecName: "kube-api-access-gcj24") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "kube-api-access-gcj24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812074 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812109 4907 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812119 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812129 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812138 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.921152 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-rtl4s" event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerDied","Data":"15a7ecb822aef91fc03055f2782820ad477ef8d9526264f5695c3e74dfdf2cde"} Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.921189 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a7ecb822aef91fc03055f2782820ad477ef8d9526264f5695c3e74dfdf2cde" Jan 27 18:27:04 crc kubenswrapper[4907]: 
I0127 18:27:04.921249 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.951255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4m58q" event={"ID":"a6b62675-d164-4a1a-b3a3-e21eda5b7190","Type":"ContainerDied","Data":"059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04"} Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.951614 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.951691 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.961890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerStarted","Data":"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129"} Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.975107 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"0b11668a2479dd38f2210be73282a21393f490a4643d43ddb56a8e4c5a9f584a"} Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.005596 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.13421466 podStartE2EDuration="5.005572285s" podCreationTimestamp="2026-01-27 18:27:00 +0000 UTC" firstStartedPulling="2026-01-27 18:27:01.693813908 +0000 UTC m=+1276.823096520" lastFinishedPulling="2026-01-27 18:27:04.565171533 +0000 UTC m=+1279.694454145" observedRunningTime="2026-01-27 18:27:04.983737558 +0000 UTC 
m=+1280.113020170" watchObservedRunningTime="2026-01-27 18:27:05.005572285 +0000 UTC m=+1280.134854897" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.467069 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.747316 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.759178 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.842911 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:05 crc kubenswrapper[4907]: E0127 18:27:05.843363 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerName="mariadb-account-create-update" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843384 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerName="mariadb-account-create-update" Jan 27 18:27:05 crc kubenswrapper[4907]: E0127 18:27:05.843399 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerName="ovn-config" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843406 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerName="ovn-config" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843652 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerName="mariadb-account-create-update" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843681 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerName="ovn-config" Jan 27 
18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.844366 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.848260 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.893213 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.946836 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947615 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947840 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5dw\" (UniqueName: 
\"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947915 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947992 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.049394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050173 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.049916 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050332 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050887 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.051105 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.051213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.051379 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.053176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.095275 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.157964 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.284145 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.292445 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.000820 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerStarted","Data":"f1cc1f0ce5bf31e11bf06cb4a3cfff9c67b4b50d00c412bc683dc02e6e6d175b"} Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.446208 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:07 crc kubenswrapper[4907]: W0127 18:27:07.597587 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31c6a42d_7a62_485f_8700_55b962892c25.slice/crio-f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee WatchSource:0}: Error finding container f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee: Status 404 returned error can't find the container with id f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.762415 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" path="/var/lib/kubelet/pods/a6b62675-d164-4a1a-b3a3-e21eda5b7190/volumes" Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.763151 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" path="/var/lib/kubelet/pods/abd14b5b-15ac-4d30-8105-13f40a1edb77/volumes" Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.012176 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerStarted","Data":"dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619"} Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.012225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerStarted","Data":"f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee"} Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.014629 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903"} Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.031229 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96prz-config-tbhbh" podStartSLOduration=3.031209474 podStartE2EDuration="3.031209474s" podCreationTimestamp="2026-01-27 18:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:08.026925802 +0000 UTC m=+1283.156208424" watchObservedRunningTime="2026-01-27 18:27:08.031209474 +0000 UTC m=+1283.160492086" Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.055616 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.860779106 podStartE2EDuration="32.055578713s" podCreationTimestamp="2026-01-27 18:26:36 +0000 UTC" firstStartedPulling="2026-01-27 18:26:47.463484801 +0000 UTC m=+1262.592767413" lastFinishedPulling="2026-01-27 18:27:07.658284408 +0000 UTC m=+1282.787567020" observedRunningTime="2026-01-27 18:27:08.051387123 +0000 UTC 
m=+1283.180669735" watchObservedRunningTime="2026-01-27 18:27:08.055578713 +0000 UTC m=+1283.184861325" Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.027082 4907 generic.go:334] "Generic (PLEG): container finished" podID="31c6a42d-7a62-485f-8700-55b962892c25" containerID="dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619" exitCode=0 Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.027305 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerDied","Data":"dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034658 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"35be2fdfabf0064873b97bf963c92542f99a5d337ab5074770d11482006e4e37"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034700 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"619472d3a4c0ca4aee907bfc905d02d88a8d090c287d29372d17cedfaa2c2d13"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"e7ad65ecf9bcaf9b82557351175feb182542e53ab6291b28e94b806be1ba7d1e"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"6153749aa95cc3a10194e64676b36e462ca981a794b90a942ccf1db841c40676"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.913752 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.991476 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.994391 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.000230 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.027296 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.037918 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.038247 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.050665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"9b96c9d8feb6c12fd8fbb0259bdfd119dc2c2f511c8da9fa4888aae48398b2ed"} Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.050728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"cf12bd9f5c25df971c7a14246c30a5ac64c41b52bcbfffc7cd7a68248cf07fe3"} Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.050784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"9234a800ab060fee0a1b2c8004047b45d0052eff7cbfbc654a596185a27b05d6"} Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.111494 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=23.197915179 podStartE2EDuration="35.111471719s" podCreationTimestamp="2026-01-27 18:26:35 +0000 UTC" firstStartedPulling="2026-01-27 18:26:56.190571176 +0000 UTC m=+1271.319853788" lastFinishedPulling="2026-01-27 18:27:08.104127716 +0000 UTC m=+1283.233410328" observedRunningTime="2026-01-27 18:27:10.092295979 +0000 UTC m=+1285.221578601" watchObservedRunningTime="2026-01-27 18:27:10.111471719 +0000 UTC m=+1285.240754331" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.143095 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.143253 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.144095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.172609 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.320828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.438106 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.440134 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.441752 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448795 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448862 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448884 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448933 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.451140 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.452745 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.549918 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.549971 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550058 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550145 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550189 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550233 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550431 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550480 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 
18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550588 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550663 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550703 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.551659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.551684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.551796 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run" (OuterVolumeSpecName: "var-run") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.552613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.554816 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.554808 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts" (OuterVolumeSpecName: "scripts") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555090 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555447 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555538 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555860 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.585175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.587377 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw" (OuterVolumeSpecName: "kube-api-access-nf5dw") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "kube-api-access-nf5dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653420 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653817 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653834 4907 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653845 4907 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc 
kubenswrapper[4907]: I0127 18:27:10.653857 4907 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653866 4907 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.765438 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.923039 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:10 crc kubenswrapper[4907]: W0127 18:27:10.963895 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887a018b_78e7_4ae0_9db1_8d6d236a0773.slice/crio-88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568 WatchSource:0}: Error finding container 88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568: Status 404 returned error can't find the container with id 88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568 Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.067425 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerDied","Data":"f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee"} Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.067752 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee" Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 
18:27:11.067841 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.070759 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cszb7" event={"ID":"887a018b-78e7-4ae0-9db1-8d6d236a0773","Type":"ContainerStarted","Data":"88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568"} Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.257372 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:11 crc kubenswrapper[4907]: W0127 18:27:11.265054 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod519051cc_696d_4d4b_9dc1_a23d7689e7fc.slice/crio-781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d WatchSource:0}: Error finding container 781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d: Status 404 returned error can't find the container with id 781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.561590 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.579128 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.770389 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c6a42d-7a62-485f-8700-55b962892c25" path="/var/lib/kubelet/pods/31c6a42d-7a62-485f-8700-55b962892c25/volumes" Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.060904 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.090110 
4907 generic.go:334] "Generic (PLEG): container finished" podID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerID="aca7542bafc6f8a501bc005b4af4e8a5df758a4f8de58c5b60071b0c8be6107f" exitCode=0 Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.090199 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cszb7" event={"ID":"887a018b-78e7-4ae0-9db1-8d6d236a0773","Type":"ContainerDied","Data":"aca7542bafc6f8a501bc005b4af4e8a5df758a4f8de58c5b60071b0c8be6107f"} Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.092799 4907 generic.go:334] "Generic (PLEG): container finished" podID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerID="6fc8c049e4d9b2beab0ebc8103626eb7b4d1724e79a941fa43f1500bfda70d3e" exitCode=0 Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.092932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerDied","Data":"6fc8c049e4d9b2beab0ebc8103626eb7b4d1724e79a941fa43f1500bfda70d3e"} Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.093143 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerStarted","Data":"781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d"} Jan 27 18:27:13 crc kubenswrapper[4907]: I0127 18:27:13.105269 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerStarted","Data":"3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7"} Jan 27 18:27:13 crc kubenswrapper[4907]: I0127 18:27:13.135807 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podStartSLOduration=3.135790022 podStartE2EDuration="3.135790022s" podCreationTimestamp="2026-01-27 18:27:10 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:13.130761478 +0000 UTC m=+1288.260044100" watchObservedRunningTime="2026-01-27 18:27:13.135790022 +0000 UTC m=+1288.265072634" Jan 27 18:27:14 crc kubenswrapper[4907]: I0127 18:27:14.116112 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:19 crc kubenswrapper[4907]: I0127 18:27:19.641886 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 18:27:19 crc kubenswrapper[4907]: I0127 18:27:19.912759 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 18:27:19 crc kubenswrapper[4907]: I0127 18:27:19.928086 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.022752 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.767833 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.839230 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.839493 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" 
podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" containerID="cri-o://2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8" gracePeriod=10 Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.061742 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.065014 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.192694 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerID="2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8" exitCode=0 Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.192816 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerDied","Data":"2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8"} Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.194218 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:23 crc kubenswrapper[4907]: E0127 18:27:23.421056 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 27 18:27:23 crc kubenswrapper[4907]: E0127 18:27:23.421694 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5d9vz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-d856z_openstack(1e2cf5dd-be65-4237-b77e-9bcc84cd26de): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 27 18:27:23 crc kubenswrapper[4907]: E0127 18:27:23.422955 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-d856z" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.487105 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.537982 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"887a018b-78e7-4ae0-9db1-8d6d236a0773\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.538065 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"887a018b-78e7-4ae0-9db1-8d6d236a0773\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.538393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "887a018b-78e7-4ae0-9db1-8d6d236a0773" (UID: "887a018b-78e7-4ae0-9db1-8d6d236a0773"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.538721 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.566603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx" (OuterVolumeSpecName: "kube-api-access-6nkvx") pod "887a018b-78e7-4ae0-9db1-8d6d236a0773" (UID: "887a018b-78e7-4ae0-9db1-8d6d236a0773"). InnerVolumeSpecName "kube-api-access-6nkvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.640932 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.824234 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951634 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951795 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951839 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951875 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.957390 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx" (OuterVolumeSpecName: "kube-api-access-jgjcx") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "kube-api-access-jgjcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.004292 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.018636 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config" (OuterVolumeSpecName: "config") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.024978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.037738 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054171 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054211 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054226 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054242 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054252 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.223540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerDied","Data":"14af702ae586c0a32f8c72ffae79c9a4feed72f954b871e63a6bcedfd4970e82"} Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.223628 4907 scope.go:117] "RemoveContainer" containerID="2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.223841 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.225685 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cszb7" event={"ID":"887a018b-78e7-4ae0-9db1-8d6d236a0773","Type":"ContainerDied","Data":"88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568"} Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.225734 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.225708 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:24 crc kubenswrapper[4907]: E0127 18:27:24.226908 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-d856z" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.246718 4907 scope.go:117] "RemoveContainer" containerID="9210e052e557ee5db0c6cb854a5c34ba61fe01036174f4d90bbedc6157af149a" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.296410 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.309739 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.605660 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.605992 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/prometheus-metric-storage-0" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" containerID="cri-o://f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903" gracePeriod=600 Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.606048 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" containerID="cri-o://350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa" gracePeriod=600 Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.605953 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" containerID="cri-o://2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb" gracePeriod=600 Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.779502 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" path="/var/lib/kubelet/pods/ef031d23-3f7c-40b7-b2f1-72863036ca69/volumes" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.248983 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903" exitCode=0 Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249019 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa" exitCode=0 Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249029 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb" exitCode=0 Jan 27 18:27:26 crc 
kubenswrapper[4907]: I0127 18:27:26.249285 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903"} Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa"} Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb"} Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.347492 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.362022 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.524427 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.524488 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.739702 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817182 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817525 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817609 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817626 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: 
\"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817688 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817707 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817939 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817976 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.818497 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.818670 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.818766 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.819032 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.825037 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.825280 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out" (OuterVolumeSpecName: "config-out") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.825338 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config" (OuterVolumeSpecName: "config") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.831726 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.834722 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf" (OuterVolumeSpecName: "kube-api-access-8tslf") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "kube-api-access-8tslf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.838748 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "pvc-f7807fd9-6025-4711-8134-26e284a305f6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.851953 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config" (OuterVolumeSpecName: "web-config") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921532 4907 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921594 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921611 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921621 4907 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921630 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921686 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921699 4907 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921708 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921716 4907 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.943376 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.943517 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7807fd9-6025-4711-8134-26e284a305f6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6") on node "crc" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.023255 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.261598 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68"} Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.261640 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.261927 4907 scope.go:117] "RemoveContainer" containerID="f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.285435 4907 scope.go:117] "RemoveContainer" containerID="350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.303172 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.307969 4907 scope.go:117] "RemoveContainer" containerID="2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.316913 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.329819 4907 scope.go:117] "RemoveContainer" containerID="0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.354513 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355004 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355022 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355043 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355318 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355331 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c6a42d-7a62-485f-8700-55b962892c25" containerName="ovn-config" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355338 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c6a42d-7a62-485f-8700-55b962892c25" containerName="ovn-config" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355370 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355378 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355403 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="init-config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355409 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="init-config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355422 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="init" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355429 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="init" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355454 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerName="mariadb-account-create-update" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355463 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerName="mariadb-account-create-update" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355481 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355489 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355748 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355770 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355786 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerName="mariadb-account-create-update" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355799 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355818 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c6a42d-7a62-485f-8700-55b962892c25" containerName="ovn-config" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355833 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.357971 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.361635 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.361847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.362017 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.362278 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.362475 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v8l29" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.363202 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.363350 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.363539 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.382505 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.385373 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435786 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435833 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435871 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435949 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mdkc\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-kube-api-access-8mdkc\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 
18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436016 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436041 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436063 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436079 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436106 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436126 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538433 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538517 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538681 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538724 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538813 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538846 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mdkc\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-kube-api-access-8mdkc\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539036 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539530 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539582 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 
18:27:27.540613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.542160 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543043 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543253 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543878 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543913 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c10aa9d009ec60f264ba4aa31b8554e40bc9aa6367f517a78b05ac7bb1849b2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.545595 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.547063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.547682 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.557410 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.560498 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.561957 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mdkc\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-kube-api-access-8mdkc\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.595208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.736171 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.760897 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" path="/var/lib/kubelet/pods/07d384d2-43f4-4290-837f-fb784fc28b37/volumes" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.761967 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" path="/var/lib/kubelet/pods/887a018b-78e7-4ae0-9db1-8d6d236a0773/volumes" Jan 27 18:27:28 crc kubenswrapper[4907]: I0127 18:27:28.389653 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:29 crc kubenswrapper[4907]: I0127 18:27:29.285458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"c5bd11e769f6116806aa8b861b9bf19795572d8b7ccbfb13f52cfc9c500b44fc"} Jan 27 18:27:29 crc kubenswrapper[4907]: I0127 18:27:29.642846 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 18:27:29 crc kubenswrapper[4907]: I0127 18:27:29.927823 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.333081 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.334760 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.344521 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.399141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.399210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.501015 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.501087 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.501925 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.536157 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.537305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.538280 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.551770 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.575998 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.577366 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.583680 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.602774 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.602994 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.603059 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.603201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 
18:27:30.617282 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.646680 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.648046 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.651739 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.652276 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.652466 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.653074 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.653320 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.660116 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.670100 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.671829 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.680180 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.704966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705016 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705075 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705158 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705190 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705334 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.707276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.714183 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.714673 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.716070 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.719038 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.719653 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.764355 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.767104 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.786744 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.800128 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821703 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821759 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821938 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " 
pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.822092 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.828174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.850859 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.852740 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.858435 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.863055 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod 
\"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.877714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.881712 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.899742 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.924734 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.935029 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.936924 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.973207 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.973968 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.975459 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.978402 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982510 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982595 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982628 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982660 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982692 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982717 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.983145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.983277 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.000405 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4cxkf"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.020742 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"]
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084086 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084145 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084169 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084194 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084219 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084234 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084293 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084322 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.085680 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.086274 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.086856 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.087381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.109296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.114961 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.115503 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.139305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.154371 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.281160 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-jsvqc"]
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.332776 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.346283 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.359983 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fxqjb"]
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.361523 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.365578 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.394110 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fxqjb"]
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.406967 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lpvwr"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.499671 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.500018 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.603303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.603368 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.604326 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.760027 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qz6th"]
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.769373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.961935 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4cxkf"]
Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.997821 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxqjb"
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.029046 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"]
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.067917 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jjm2k"]
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.106799 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.225807 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"]
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.336984 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"]
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.353731 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f95-account-create-update-s69m5" event={"ID":"c3998964-67eb-4adb-912d-a6367ae3beaf","Type":"ContainerStarted","Data":"1db6cbf55cb0c25603a83b72a7e59442a34fdae25117fdae494efe5194fbe9f9"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.357148 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4cxkf" event={"ID":"32b7a898-5d57-496a-8ad1-380b636e3629","Type":"ContainerStarted","Data":"7ebdc1e4356eb53f8529b577c97b97d9933e20814623f532b4bb4b915edac61b"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.359351 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerStarted","Data":"68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.359390 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerStarted","Data":"89855652aa489b1f329775a7995f090e17a9598a85b1756c65afd75b61a49760"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.381644 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"f67b424af53cfdd5ce878a8022111e895a8cf9cfb3040f9293498a59c3639643"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.383892 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerStarted","Data":"e14480432b2735c1df3bf974cf8b36e1846853ba2905c702511a67ffaa4ead95"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.385744 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa35-account-create-update-nlm4d" event={"ID":"701eaff9-db27-4bff-975c-b8ebf034725f","Type":"ContainerStarted","Data":"85da0aa90c973bde462e574429252127836f411550c3f80df423e234745c0a3a"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.391419 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qz6th" event={"ID":"ac5cca69-8afc-417f-9f22-93c279730bf7","Type":"ContainerStarted","Data":"af1bc617ee948a447cd82ae616ac3ed3b9176830827dbc589579b48c36bb85c3"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.393687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8259-account-create-update-b45js" event={"ID":"7844ef4e-92dd-4ea6-a792-b255290ef833","Type":"ContainerStarted","Data":"c4eb5493aee2cfc49b3a7bf9c6fd1fb99524c4b2093bd9c451b7385b9a2478b9"}
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.411099 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-jsvqc" podStartSLOduration=2.411082672 podStartE2EDuration="2.411082672s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:32.37854027 +0000 UTC m=+1307.507822882" watchObservedRunningTime="2026-01-27 18:27:32.411082672 +0000 UTC m=+1307.540365274"
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.416322 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"]
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.610529 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qz6th" podStartSLOduration=2.6105141549999997 podStartE2EDuration="2.610514155s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:32.455830074 +0000 UTC m=+1307.585112676" watchObservedRunningTime="2026-01-27 18:27:32.610514155 +0000 UTC m=+1307.739796767"
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.615785 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lpvwr"]
Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.738574 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fxqjb"]
Jan 27 18:27:33 crc kubenswrapper[4907]: E0127 18:27:33.323333 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5cca69_8afc_417f_9f22_93c279730bf7.slice/crio-conmon-63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.411847 4907 generic.go:334] "Generic (PLEG): container finished" podID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerID="6e9d2124e0377737283913dd9cbf18f7728bb3d38ed97f318b0a2c7e1a625185" exitCode=0
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.411968 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f95-account-create-update-s69m5" event={"ID":"c3998964-67eb-4adb-912d-a6367ae3beaf","Type":"ContainerDied","Data":"6e9d2124e0377737283913dd9cbf18f7728bb3d38ed97f318b0a2c7e1a625185"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.415011 4907 generic.go:334] "Generic (PLEG): container finished" podID="85c8faae-95fb-4533-b45c-51e91bb95947" containerID="68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc" exitCode=0
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.415100 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerDied","Data":"68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.417687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerStarted","Data":"948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.417733 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerStarted","Data":"0560c14db158d64bce6c26ea29bec4d9fb4fa1f39a3d1f43370dee25cb6a56dc"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.420097 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerStarted","Data":"3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.420154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerStarted","Data":"1b7ee34ddedd08cc57a130c57125c95e25259fc35a2121708ce9c43860c03f2d"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.422182 4907 generic.go:334] "Generic (PLEG): container finished" podID="701eaff9-db27-4bff-975c-b8ebf034725f" containerID="f4fe6d9aa44983cf3005cdcf2f1caa2f42f6abc6fd5fd5929b1ffc71281905af" exitCode=0
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.422229 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa35-account-create-update-nlm4d" event={"ID":"701eaff9-db27-4bff-975c-b8ebf034725f","Type":"ContainerDied","Data":"f4fe6d9aa44983cf3005cdcf2f1caa2f42f6abc6fd5fd5929b1ffc71281905af"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.424232 4907 generic.go:334] "Generic (PLEG): container finished" podID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerID="63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186" exitCode=0
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.424309 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qz6th" event={"ID":"ac5cca69-8afc-417f-9f22-93c279730bf7","Type":"ContainerDied","Data":"63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.432716 4907 generic.go:334] "Generic (PLEG): container finished" podID="7844ef4e-92dd-4ea6-a792-b255290ef833" containerID="902d727f209f42dea64c5a07767c7eefd3763b39fbd8787f8e221e479efe5a44" exitCode=0
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.432781 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8259-account-create-update-b45js" event={"ID":"7844ef4e-92dd-4ea6-a792-b255290ef833","Type":"ContainerDied","Data":"902d727f209f42dea64c5a07767c7eefd3763b39fbd8787f8e221e479efe5a44"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.434904 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerStarted","Data":"3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.434925 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerStarted","Data":"27b355db7d5cedc8181a02fb361426451c203f9d2ad81fdff895eeef37f20409"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.438530 4907 generic.go:334] "Generic (PLEG): container finished" podID="32b7a898-5d57-496a-8ad1-380b636e3629" containerID="40e8e634f7c46a3b2a6980b5fabdea0883786df5d7e952882f0176d870d9c0b4" exitCode=0
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.439534 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4cxkf" event={"ID":"32b7a898-5d57-496a-8ad1-380b636e3629","Type":"ContainerDied","Data":"40e8e634f7c46a3b2a6980b5fabdea0883786df5d7e952882f0176d870d9c0b4"}
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.472984 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-f55d-account-create-update-gfk7k" podStartSLOduration=3.472960522 podStartE2EDuration="3.472960522s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:33.461199626 +0000 UTC m=+1308.590482238" watchObservedRunningTime="2026-01-27 18:27:33.472960522 +0000 UTC m=+1308.602243154"
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.485679 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lpvwr" podStartSLOduration=3.485661696 podStartE2EDuration="3.485661696s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:33.476852384 +0000 UTC m=+1308.606134996" watchObservedRunningTime="2026-01-27 18:27:33.485661696 +0000 UTC m=+1308.614944308"
Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.570094 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fxqjb" podStartSLOduration=2.570077205 podStartE2EDuration="2.570077205s" podCreationTimestamp="2026-01-27 18:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:33.565008749 +0000 UTC m=+1308.694291361" watchObservedRunningTime="2026-01-27 18:27:33.570077205 +0000 UTC m=+1308.699359817"
Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.448064 4907 generic.go:334] "Generic (PLEG): container finished" podID="b54f9573-0bd6-4133-872a-b9e73129d654" containerID="948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45" exitCode=0
Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.448129 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerDied","Data":"948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45"}
Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.451066 4907 generic.go:334] "Generic (PLEG): container finished" podID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerID="3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944" exitCode=0
Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.451368 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerDied","Data":"3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944"}
Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.453196 4907 generic.go:334] "Generic (PLEG): container finished" podID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerID="3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661" exitCode=0
Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.453256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerDied","Data":"3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661"}
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.331607 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-jsvqc"
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.340916 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js"
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.358366 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d"
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.365233 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4cxkf"
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.377013 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5"
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381138 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"701eaff9-db27-4bff-975c-b8ebf034725f\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381358 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"7844ef4e-92dd-4ea6-a792-b255290ef833\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381417 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"32b7a898-5d57-496a-8ad1-380b636e3629\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381496 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"32b7a898-5d57-496a-8ad1-380b636e3629\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381527 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"85c8faae-95fb-4533-b45c-51e91bb95947\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381576 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"7844ef4e-92dd-4ea6-a792-b255290ef833\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381618 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"85c8faae-95fb-4533-b45c-51e91bb95947\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381681 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"701eaff9-db27-4bff-975c-b8ebf034725f\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.395854 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7844ef4e-92dd-4ea6-a792-b255290ef833" (UID: "7844ef4e-92dd-4ea6-a792-b255290ef833"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.395969 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32b7a898-5d57-496a-8ad1-380b636e3629" (UID: "32b7a898-5d57-496a-8ad1-380b636e3629"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.396465 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85c8faae-95fb-4533-b45c-51e91bb95947" (UID: "85c8faae-95fb-4533-b45c-51e91bb95947"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.397687 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "701eaff9-db27-4bff-975c-b8ebf034725f" (UID: "701eaff9-db27-4bff-975c-b8ebf034725f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.406051 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qz6th"
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.465263 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn" (OuterVolumeSpecName: "kube-api-access-m4hpn") pod "701eaff9-db27-4bff-975c-b8ebf034725f" (UID: "701eaff9-db27-4bff-975c-b8ebf034725f"). InnerVolumeSpecName "kube-api-access-m4hpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.466387 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx" (OuterVolumeSpecName: "kube-api-access-cwkrx") pod "32b7a898-5d57-496a-8ad1-380b636e3629" (UID: "32b7a898-5d57-496a-8ad1-380b636e3629"). InnerVolumeSpecName "kube-api-access-cwkrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.478648 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t" (OuterVolumeSpecName: "kube-api-access-flr2t") pod "85c8faae-95fb-4533-b45c-51e91bb95947" (UID: "85c8faae-95fb-4533-b45c-51e91bb95947"). InnerVolumeSpecName "kube-api-access-flr2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484670 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"c3998964-67eb-4adb-912d-a6367ae3beaf\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"c3998964-67eb-4adb-912d-a6367ae3beaf\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"ac5cca69-8afc-417f-9f22-93c279730bf7\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484925 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"ac5cca69-8afc-417f-9f22-93c279730bf7\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") "
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.485091 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3998964-67eb-4adb-912d-a6367ae3beaf" (UID: "c3998964-67eb-4adb-912d-a6367ae3beaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.486755 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac5cca69-8afc-417f-9f22-93c279730bf7" (UID: "ac5cca69-8afc-417f-9f22-93c279730bf7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.488334 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf" (OuterVolumeSpecName: "kube-api-access-28rdf") pod "ac5cca69-8afc-417f-9f22-93c279730bf7" (UID: "ac5cca69-8afc-417f-9f22-93c279730bf7"). InnerVolumeSpecName "kube-api-access-28rdf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.488549 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw" (OuterVolumeSpecName: "kube-api-access-stqxw") pod "c3998964-67eb-4adb-912d-a6367ae3beaf" (UID: "c3998964-67eb-4adb-912d-a6367ae3beaf"). InnerVolumeSpecName "kube-api-access-stqxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489541 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") on node \"crc\" DevicePath \"\""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489580 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489592 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") on node \"crc\" DevicePath \"\""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489603 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") on node \"crc\" DevicePath \"\""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489613 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489623 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489635 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") on node \"crc\" DevicePath \"\""
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489646 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489656 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489665 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489700 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.494714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f95-account-create-update-s69m5" event={"ID":"c3998964-67eb-4adb-912d-a6367ae3beaf","Type":"ContainerDied","Data":"1db6cbf55cb0c25603a83b72a7e59442a34fdae25117fdae494efe5194fbe9f9"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.494819 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db6cbf55cb0c25603a83b72a7e59442a34fdae25117fdae494efe5194fbe9f9" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.494930 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.498397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l" (OuterVolumeSpecName: "kube-api-access-6w54l") pod "7844ef4e-92dd-4ea6-a792-b255290ef833" (UID: "7844ef4e-92dd-4ea6-a792-b255290ef833"). InnerVolumeSpecName "kube-api-access-6w54l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.500611 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4cxkf" event={"ID":"32b7a898-5d57-496a-8ad1-380b636e3629","Type":"ContainerDied","Data":"7ebdc1e4356eb53f8529b577c97b97d9933e20814623f532b4bb4b915edac61b"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.500713 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebdc1e4356eb53f8529b577c97b97d9933e20814623f532b4bb4b915edac61b" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.500813 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.514413 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerID="f67b424af53cfdd5ce878a8022111e895a8cf9cfb3040f9293498a59c3639643" exitCode=0 Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.514521 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerDied","Data":"f67b424af53cfdd5ce878a8022111e895a8cf9cfb3040f9293498a59c3639643"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.528351 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.528807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerDied","Data":"27b355db7d5cedc8181a02fb361426451c203f9d2ad81fdff895eeef37f20409"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.528888 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b355db7d5cedc8181a02fb361426451c203f9d2ad81fdff895eeef37f20409" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.545970 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa35-account-create-update-nlm4d" event={"ID":"701eaff9-db27-4bff-975c-b8ebf034725f","Type":"ContainerDied","Data":"85da0aa90c973bde462e574429252127836f411550c3f80df423e234745c0a3a"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.546007 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85da0aa90c973bde462e574429252127836f411550c3f80df423e234745c0a3a" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.546087 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.547322 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.548760 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.554592 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8259-account-create-update-b45js" event={"ID":"7844ef4e-92dd-4ea6-a792-b255290ef833","Type":"ContainerDied","Data":"c4eb5493aee2cfc49b3a7bf9c6fd1fb99524c4b2093bd9c451b7385b9a2478b9"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.555185 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4eb5493aee2cfc49b3a7bf9c6fd1fb99524c4b2093bd9c451b7385b9a2478b9" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.554876 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.565117 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerDied","Data":"89855652aa489b1f329775a7995f090e17a9598a85b1756c65afd75b61a49760"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.565189 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89855652aa489b1f329775a7995f090e17a9598a85b1756c65afd75b61a49760" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.565346 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.596307 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.600970 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.601072 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerDied","Data":"0560c14db158d64bce6c26ea29bec4d9fb4fa1f39a3d1f43370dee25cb6a56dc"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.601095 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0560c14db158d64bce6c26ea29bec4d9fb4fa1f39a3d1f43370dee25cb6a56dc" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.611963 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.612455 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerDied","Data":"1b7ee34ddedd08cc57a130c57125c95e25259fc35a2121708ce9c43860c03f2d"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.612491 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7ee34ddedd08cc57a130c57125c95e25259fc35a2121708ce9c43860c03f2d" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.632123 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qz6th" event={"ID":"ac5cca69-8afc-417f-9f22-93c279730bf7","Type":"ContainerDied","Data":"af1bc617ee948a447cd82ae616ac3ed3b9176830827dbc589579b48c36bb85c3"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.632166 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1bc617ee948a447cd82ae616ac3ed3b9176830827dbc589579b48c36bb85c3" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.632256 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700655 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"d206e054-cdc8-4a59-9de8-93bfeae80700\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700708 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"b54f9573-0bd6-4133-872a-b9e73129d654\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700733 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"421865e2-2878-4bc4-9480-7afb5e7133fd\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700792 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"421865e2-2878-4bc4-9480-7afb5e7133fd\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700846 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"d206e054-cdc8-4a59-9de8-93bfeae80700\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700867 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"b54f9573-0bd6-4133-872a-b9e73129d654\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.702184 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "421865e2-2878-4bc4-9480-7afb5e7133fd" (UID: "421865e2-2878-4bc4-9480-7afb5e7133fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.704008 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jjm2k" podStartSLOduration=2.671313185 podStartE2EDuration="8.703990677s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="2026-01-27 18:27:32.106604039 +0000 UTC m=+1307.235886651" lastFinishedPulling="2026-01-27 18:27:38.139281531 +0000 UTC m=+1313.268564143" observedRunningTime="2026-01-27 18:27:38.695479993 +0000 UTC m=+1313.824762605" watchObservedRunningTime="2026-01-27 18:27:38.703990677 +0000 UTC m=+1313.833273289" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.704105 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d206e054-cdc8-4a59-9de8-93bfeae80700" (UID: "d206e054-cdc8-4a59-9de8-93bfeae80700"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.704728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt" (OuterVolumeSpecName: "kube-api-access-pwfbt") pod "421865e2-2878-4bc4-9480-7afb5e7133fd" (UID: "421865e2-2878-4bc4-9480-7afb5e7133fd"). InnerVolumeSpecName "kube-api-access-pwfbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.705390 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2" (OuterVolumeSpecName: "kube-api-access-wmgn2") pod "d206e054-cdc8-4a59-9de8-93bfeae80700" (UID: "d206e054-cdc8-4a59-9de8-93bfeae80700"). InnerVolumeSpecName "kube-api-access-wmgn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.707975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b54f9573-0bd6-4133-872a-b9e73129d654" (UID: "b54f9573-0bd6-4133-872a-b9e73129d654"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.713870 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599" (OuterVolumeSpecName: "kube-api-access-w5599") pod "b54f9573-0bd6-4133-872a-b9e73129d654" (UID: "b54f9573-0bd6-4133-872a-b9e73129d654"). InnerVolumeSpecName "kube-api-access-w5599". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802873 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802911 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802922 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802971 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802983 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802992 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:39 crc kubenswrapper[4907]: I0127 18:27:39.644924 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" 
event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerStarted","Data":"fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9"} Jan 27 18:27:39 crc kubenswrapper[4907]: I0127 18:27:39.649899 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:39 crc kubenswrapper[4907]: I0127 18:27:39.650222 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"a57186f91849ce1a4ed109cfc013b416cbe04dcb095c0b6377b4eab7974c4186"} Jan 27 18:27:41 crc kubenswrapper[4907]: I0127 18:27:41.674605 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerStarted","Data":"8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc"} Jan 27 18:27:41 crc kubenswrapper[4907]: I0127 18:27:41.714341 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-d856z" podStartSLOduration=5.506933106 podStartE2EDuration="38.714320936s" podCreationTimestamp="2026-01-27 18:27:03 +0000 UTC" firstStartedPulling="2026-01-27 18:27:06.972584681 +0000 UTC m=+1282.101867293" lastFinishedPulling="2026-01-27 18:27:40.179972511 +0000 UTC m=+1315.309255123" observedRunningTime="2026-01-27 18:27:41.705216165 +0000 UTC m=+1316.834498777" watchObservedRunningTime="2026-01-27 18:27:41.714320936 +0000 UTC m=+1316.843603548" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.687180 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"1dea172c6d28f2691392445aa77bfbba4308ce3ace2c0e3b5d9e46722da81c85"} Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.687621 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"21b343aff3cb89a3b9711953b78bf2cea639d421de01b2b882ae43fd82fcf8bc"} Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.691233 4907 generic.go:334] "Generic (PLEG): container finished" podID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerID="fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9" exitCode=0 Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.691282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerDied","Data":"fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9"} Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.736934 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.736990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.743302 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.743274892 podStartE2EDuration="15.743274892s" podCreationTimestamp="2026-01-27 18:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:42.730520837 +0000 UTC m=+1317.859803499" watchObservedRunningTime="2026-01-27 18:27:42.743274892 +0000 UTC m=+1317.872557514" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.746733 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:43 crc kubenswrapper[4907]: I0127 18:27:43.704951 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.235433 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.362831 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"2dd8fea0-24a6-4212-875a-5cf95105f549\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.362885 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"2dd8fea0-24a6-4212-875a-5cf95105f549\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.362975 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod \"2dd8fea0-24a6-4212-875a-5cf95105f549\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.382830 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v" (OuterVolumeSpecName: "kube-api-access-qwn5v") pod "2dd8fea0-24a6-4212-875a-5cf95105f549" (UID: "2dd8fea0-24a6-4212-875a-5cf95105f549"). InnerVolumeSpecName "kube-api-access-qwn5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.465265 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.477856 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data" (OuterVolumeSpecName: "config-data") pod "2dd8fea0-24a6-4212-875a-5cf95105f549" (UID: "2dd8fea0-24a6-4212-875a-5cf95105f549"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.482886 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dd8fea0-24a6-4212-875a-5cf95105f549" (UID: "2dd8fea0-24a6-4212-875a-5cf95105f549"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.566692 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.566716 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.712157 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.712144 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerDied","Data":"e14480432b2735c1df3bf974cf8b36e1846853ba2905c702511a67ffaa4ead95"} Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.712283 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e14480432b2735c1df3bf974cf8b36e1846853ba2905c702511a67ffaa4ead95" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.938975 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945616 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945645 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945663 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945670 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945688 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945694 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerName="mariadb-account-create-update" 
Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945705 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerName="keystone-db-sync" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945710 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerName="keystone-db-sync" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945718 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945725 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945738 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945744 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945760 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945765 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945777 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945783 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945791 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945796 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945811 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945816 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946077 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946088 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946095 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerName="keystone-db-sync" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946106 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946116 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" 
containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946129 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946137 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946145 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946344 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946353 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.947600 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.971457 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.999528 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.001318 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.006276 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.006629 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.006754 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.007009 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.007109 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.026340 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080458 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080526 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080565 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080677 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080713 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080767 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 
18:27:45.080895 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080939 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080997 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.081119 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.081217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.132899 4907 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.134172 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.143300 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-865rb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.144027 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.146012 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.182944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183270 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183395 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183449 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183506 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183580 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183608 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183648 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183719 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183776 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.184169 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.194180 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.195127 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.197175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.197661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.197674 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.209375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 
crc kubenswrapper[4907]: I0127 18:27:45.209653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.211106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.212710 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.221201 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.254307 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.255608 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.255891 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.257519 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.257755 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.257973 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vbpkv" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.270355 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.282683 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.284610 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.287757 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dnq7b" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.287999 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.288175 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.294455 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.294520 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.294679 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.296094 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.326536 4907 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.337930 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.367339 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.368799 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.373877 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4w9sx" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.374111 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.374480 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.389335 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.394832 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400123 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400231 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"cinder-db-sync-kbngs\" (UID: 
\"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400596 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400633 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400658 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" 
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400748 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.401519 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.401713 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sh25w"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.413862 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.419615 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6gppf"]
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.422243 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.441464 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.446326 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"]
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.501347 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rl9vb"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.516688 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.516864 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.516976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517068 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517125 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517254 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517308 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517388 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517419 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517480 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517525 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517589 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517635 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517665 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.522533 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.537130 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.540149 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.547217 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x9tl4"]
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.552973 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.554131 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.554572 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.557128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.563175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.565443 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.586226 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"]
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.592632 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.604108 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"]
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619465 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619549 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619586 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619618 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619655 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619746 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.620630 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.624828 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.627087 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.627858 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.627960 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.628225 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.659593 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.675399 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.708842 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vbpkv"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.716011 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723268 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723568 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723691 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723795 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723902 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.726790 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.727473 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dnq7b"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.731218 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.733734 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8p796"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.742205 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.742433 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.743924 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.760906 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4w9sx"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.767643 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.808139 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sh25w"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.814826 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x9tl4"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828751 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828894 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.829033 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.829080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.829124 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.829855 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.830080 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.830120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831056 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831328 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831643 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.832162 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.832725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.834953 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.864804 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.932927 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.933298 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.933350 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.933391 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.935151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.936357 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.936733 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.936785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.938314 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.938808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.956542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.961822 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.966730 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.968760 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.976289 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0"
Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.062669 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"]
Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.087870 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.282828 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"]
Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.329025 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.762691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerStarted","Data":"9d667868c21ba4bb6a7e3bb1330d96e385bb46248bfd87a6db756b37f813ef59"}
Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.768963 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" event={"ID":"fa089541-ea04-43f9-b816-fcde07a28a99","Type":"ContainerStarted","Data":"ef82f5a51c0b5b446b1227793ccf32b81c18c3d762371e596ecaa582f532aa19"}
Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.782090 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rl9vb"]
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.091658 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6gppf"]
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.114568 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbngs"]
Jan 27 18:27:47 crc kubenswrapper[4907]: W0127 18:27:47.126038 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e539a06_3352_4163_a259_6fd53182fe02.slice/crio-41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2 WatchSource:0}: Error finding container 41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2: Status 404 returned error can't find the container with id 41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2
Jan 27 18:27:47 crc kubenswrapper[4907]: W0127 18:27:47.138172 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde23a4c9_a62e_4523_8480_b19f3f10f586.slice/crio-58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b WatchSource:0}: Error finding container 58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b: Status 404 returned error can't find the container with id 58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.183032 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x9tl4"]
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.191874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8p796"]
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.203287 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"]
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.299969 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.795152 4907 generic.go:334] "Generic (PLEG): container finished" podID="fa089541-ea04-43f9-b816-fcde07a28a99" containerID="386525da4db0bca8e45a25c6984551110444cfdeefd94e7fedc20ede8867f3b3" exitCode=0
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.795446 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" event={"ID":"fa089541-ea04-43f9-b816-fcde07a28a99","Type":"ContainerDied","Data":"386525da4db0bca8e45a25c6984551110444cfdeefd94e7fedc20ede8867f3b3"}
Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.807034 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796"
event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerStarted","Data":"7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.807072 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796" event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerStarted","Data":"41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.826839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerStarted","Data":"58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.829111 4907 generic.go:334] "Generic (PLEG): container finished" podID="89f27191-1460-4103-a832-acf1b0a8eca1" containerID="9372dc1a167de07a6718a73cef4ed28d22b27bf8d1e903c2b77f36fdfb200ef7" exitCode=0 Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.829169 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerDied","Data":"9372dc1a167de07a6718a73cef4ed28d22b27bf8d1e903c2b77f36fdfb200ef7"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.829197 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerStarted","Data":"9f9319897192a3481febde0075f2336b2935d057d9a13f6811945fb00c33176e"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.836916 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerStarted","Data":"fc4f516e379ca6129c715c0b9a600b0f5b1d171146eae6a53e9f72e6f97ae48c"} Jan 27 18:27:47 crc kubenswrapper[4907]: 
I0127 18:27:47.843224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerStarted","Data":"840873a3d56f4dc36cd43eafa6b62d032d44d3c5edf94de7d25cbfc122cc2c74"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.863769 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8p796" podStartSLOduration=2.863747241 podStartE2EDuration="2.863747241s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:47.842838202 +0000 UTC m=+1322.972120824" watchObservedRunningTime="2026-01-27 18:27:47.863747241 +0000 UTC m=+1322.993029853" Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.881119 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerStarted","Data":"9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.897793 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"f21eb03c891c1ca372e748b6131d8a4413d1eb5b66cad03fa9fdb685e87a089a"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.922142 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerStarted","Data":"f268f2f373629b4712746f03d9c3a8027b21b68587212bf16359f8e777653bf7"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.994874 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nsfdn" podStartSLOduration=3.994856737 podStartE2EDuration="3.994856737s" 
podCreationTimestamp="2026-01-27 18:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:47.944033881 +0000 UTC m=+1323.073316493" watchObservedRunningTime="2026-01-27 18:27:47.994856737 +0000 UTC m=+1323.124139349" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.197074 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.378929 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533340 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533705 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533797 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533894 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod 
\"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533948 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.534001 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.558381 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g" (OuterVolumeSpecName: "kube-api-access-8zh5g") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "kube-api-access-8zh5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.599033 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.627339 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.641231 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.641259 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.641270 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.643648 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.649806 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.693365 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config" (OuterVolumeSpecName: "config") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.743302 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.743341 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.743350 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.966099 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" 
event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerStarted","Data":"f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a"} Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.966188 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.987947 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" podStartSLOduration=3.9879313659999998 podStartE2EDuration="3.987931366s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:48.980994098 +0000 UTC m=+1324.110276720" watchObservedRunningTime="2026-01-27 18:27:48.987931366 +0000 UTC m=+1324.117213978" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.989835 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.990177 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" event={"ID":"fa089541-ea04-43f9-b816-fcde07a28a99","Type":"ContainerDied","Data":"ef82f5a51c0b5b446b1227793ccf32b81c18c3d762371e596ecaa582f532aa19"} Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.990214 4907 scope.go:117] "RemoveContainer" containerID="386525da4db0bca8e45a25c6984551110444cfdeefd94e7fedc20ede8867f3b3" Jan 27 18:27:49 crc kubenswrapper[4907]: I0127 18:27:49.119067 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:49 crc kubenswrapper[4907]: I0127 18:27:49.164289 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:49 crc kubenswrapper[4907]: I0127 18:27:49.765258 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" path="/var/lib/kubelet/pods/fa089541-ea04-43f9-b816-fcde07a28a99/volumes" Jan 27 18:27:53 crc kubenswrapper[4907]: I0127 18:27:53.060600 4907 generic.go:334] "Generic (PLEG): container finished" podID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerID="9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd" exitCode=0 Jan 27 18:27:53 crc kubenswrapper[4907]: I0127 18:27:53.060677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerDied","Data":"9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd"} Jan 27 18:27:53 crc kubenswrapper[4907]: I0127 18:27:53.064368 4907 generic.go:334] "Generic (PLEG): container finished" podID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerID="8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc" exitCode=0 Jan 27 18:27:53 crc kubenswrapper[4907]: 
I0127 18:27:53.064414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerDied","Data":"8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc"} Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.553913 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.567774 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.719981 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720122 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720199 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: 
\"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720224 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720248 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720287 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720308 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720366 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720423 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.728284 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.728498 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz" (OuterVolumeSpecName: "kube-api-access-5d9vz") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "kube-api-access-5d9vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.728513 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts" (OuterVolumeSpecName: "scripts") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.729897 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.730623 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92" (OuterVolumeSpecName: "kube-api-access-dlc92") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "kube-api-access-dlc92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.736804 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.757812 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.757919 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data" (OuterVolumeSpecName: "config-data") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.774266 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.791427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data" (OuterVolumeSpecName: "config-data") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822492 4907 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822527 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822540 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822569 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc 
kubenswrapper[4907]: I0127 18:27:55.822578 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822588 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822597 4907 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822606 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822616 4907 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822625 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.938193 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.016487 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:56 crc 
kubenswrapper[4907]: I0127 18:27:56.016739 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" containerID="cri-o://3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7" gracePeriod=10 Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.103675 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerDied","Data":"f1cc1f0ce5bf31e11bf06cb4a3cfff9c67b4b50d00c412bc683dc02e6e6d175b"} Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.103722 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1cc1f0ce5bf31e11bf06cb4a3cfff9c67b4b50d00c412bc683dc02e6e6d175b" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.103812 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.110063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerDied","Data":"9d667868c21ba4bb6a7e3bb1330d96e385bb46248bfd87a6db756b37f813ef59"} Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.110099 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d667868c21ba4bb6a7e3bb1330d96e385bb46248bfd87a6db756b37f813ef59" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.110104 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.521950 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.521998 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.638084 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.647323 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.735334 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:27:56 crc kubenswrapper[4907]: E0127 18:27:56.735978 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" containerName="init" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.735999 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" containerName="init" Jan 27 18:27:56 crc kubenswrapper[4907]: E0127 18:27:56.736024 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerName="glance-db-sync" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736031 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerName="glance-db-sync" Jan 27 18:27:56 crc kubenswrapper[4907]: E0127 18:27:56.736050 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerName="keystone-bootstrap" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736056 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerName="keystone-bootstrap" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736235 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerName="glance-db-sync" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736255 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" containerName="init" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736268 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerName="keystone-bootstrap" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.738968 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752077 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752126 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752227 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752476 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752650 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.762522 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.848928 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.848983 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849052 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849131 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849216 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849329 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.955905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956263 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956387 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956446 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956513 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.972500 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"keystone-bootstrap-px4wp\" (UID: 
\"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.973064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.978262 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.982665 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.987873 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.999844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 
18:27:57.090589 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.154447 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.156184 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.178840 4907 generic.go:334] "Generic (PLEG): container finished" podID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerID="3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7" exitCode=0 Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.178882 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerDied","Data":"3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7"} Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.187566 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.274944 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275048 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275099 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275315 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.377951 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: 
\"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378041 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378138 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378188 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378307 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378373 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.379091 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.379238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.380095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.380796 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.381547 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.404308 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.503646 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.764602 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" path="/var/lib/kubelet/pods/2e9b8697-5a15-4a0d-aab8-702699520f6a/volumes" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.982323 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.985046 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.988310 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.988628 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5gmz5" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.988778 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.033142 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.111239 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.111612 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.111813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc 
kubenswrapper[4907]: I0127 18:27:58.111949 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.112120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.112232 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.112354 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.214829 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " 
pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215116 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215245 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215386 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215255 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " 
pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215751 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215854 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.216372 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.220420 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.224673 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.242378 
4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.242388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.244539 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.245300 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f246d510422cbc2bc7c65e4cfc4b09adee7977bbf094457002b4446ed6bccfbd/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.372323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 
18:27:58.407743 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.467992 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.475587 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.488725 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.603743 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636480 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636544 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636601 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636641 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636800 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636926 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739519 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739597 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739823 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739866 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739895 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740031 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740098 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.744771 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.745329 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.745368 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac46f52a85ef09145563fd0548ce08354897473e6fde7cb6037ea95dd6b9939/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.758411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.759327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.766611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.802661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.893613 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 18:28:00 crc kubenswrapper[4907]: I0127 18:28:00.537292 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 18:28:00 crc kubenswrapper[4907]: I0127 18:28:00.626249 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 18:28:00 crc kubenswrapper[4907]: I0127 18:28:00.766898 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused"
Jan 27 18:28:05 crc kubenswrapper[4907]: I0127 18:28:05.766987 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused"
Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.136491 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.136755 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch8dm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-6gppf_openstack(de23a4c9-a62e-4523-8480-b19f3f10f586): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.137989 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-6gppf" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586"
Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.280187 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-6gppf" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586"
Jan 27 18:28:09 crc kubenswrapper[4907]: E0127 18:28:09.345230 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 27 18:28:09 crc kubenswrapper[4907]: E0127 18:28:09.346259 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmb6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-x9tl4_openstack(3d3838ba-a929-4aab-a58d-dd4f39628f00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 18:28:09 crc kubenswrapper[4907]: E0127 18:28:09.348528 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-x9tl4" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00"
Jan 27 18:28:10 crc kubenswrapper[4907]: E0127 18:28:10.327107 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-x9tl4" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00"
Jan 27 18:28:14 crc kubenswrapper[4907]: I0127 18:28:14.368569 4907 generic.go:334] "Generic (PLEG): container finished" podID="9e539a06-3352-4163-a259-6fd53182fe02" containerID="7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c" exitCode=0
Jan 27 18:28:14 crc kubenswrapper[4907]: I0127 18:28:14.368662 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796" event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerDied","Data":"7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c"}
Jan 27 18:28:15 crc kubenswrapper[4907]: I0127 18:28:15.767610 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout"
Jan 27 18:28:15 crc kubenswrapper[4907]: I0127 18:28:15.768160 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"
Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.014507 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.015197 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hk57z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-rl9vb_openstack(90ffb508-65d2-4c20-95db-209a1c9a3399): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.016405 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-rl9vb" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399"
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.173689 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.197969 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8p796"
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314333 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314420 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314446 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"9e539a06-3352-4163-a259-6fd53182fe02\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314513 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"9e539a06-3352-4163-a259-6fd53182fe02\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314542 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314684 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"9e539a06-3352-4163-a259-6fd53182fe02\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314717 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314799 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314845 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") "
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.320589 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426" (OuterVolumeSpecName: "kube-api-access-jv426") pod "9e539a06-3352-4163-a259-6fd53182fe02" (UID: "9e539a06-3352-4163-a259-6fd53182fe02"). InnerVolumeSpecName "kube-api-access-jv426". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.322480 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm" (OuterVolumeSpecName: "kube-api-access-pcnzm") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "kube-api-access-pcnzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.376182 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.376475 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config" (OuterVolumeSpecName: "config") pod "9e539a06-3352-4163-a259-6fd53182fe02" (UID: "9e539a06-3352-4163-a259-6fd53182fe02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.378579 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config" (OuterVolumeSpecName: "config") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.378881 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e539a06-3352-4163-a259-6fd53182fe02" (UID: "9e539a06-3352-4163-a259-6fd53182fe02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.390485 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.390741 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.391162 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.414490 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerDied","Data":"781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d"}
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.414580 4907 scope.go:117] "RemoveContainer" containerID="3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7"
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.414516 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417325 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417377 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417392 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417406 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417417 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417427 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417439 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417450 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417460 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.418730 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8p796"
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.419274 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796" event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerDied","Data":"41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2"}
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.419313 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2"
Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.420259 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-rl9vb" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399"
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.520097 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"]
Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.531642 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"]
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.550399 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"]
Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.586859 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.587011 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r4jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kbngs_openstack(fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.588318 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kbngs" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.609570 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"]
Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.610710 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.610815 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns"
Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.610892 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="init"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.610947 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="init"
Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.611035 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e539a06-3352-4163-a259-6fd53182fe02" containerName="neutron-db-sync"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.611092 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e539a06-3352-4163-a259-6fd53182fe02" containerName="neutron-db-sync"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.611379 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e539a06-3352-4163-a259-6fd53182fe02" containerName="neutron-db-sync"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.611459 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.612763 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.638209 4907 scope.go:117] "RemoveContainer" containerID="6fc8c049e4d9b2beab0ebc8103626eb7b4d1724e79a941fa43f1500bfda70d3e"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.642471 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"]
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.719443 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"]
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.723640 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.730847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.731180 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.731361 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.732063 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"]
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752646 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752695 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l"
Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752720 4907 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752737 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752763 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752806 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752828 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752847 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752891 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.753032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.766685 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dnq7b" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.816133 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" path="/var/lib/kubelet/pods/519051cc-696d-4d4b-9dc1-a23d7689e7fc/volumes" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 
18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855469 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855512 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855591 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855619 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855641 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855699 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855752 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855802 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.859312 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.859455 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.860327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.861028 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.861413 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.871992 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod 
\"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.872134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.872356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.872618 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.879266 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.891081 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " 
pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.019671 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.101632 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.130925 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.447753 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.470457 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7"} Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.516010 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerStarted","Data":"8945b01efe6f0eefa5770c937a9e0da1d1ed380ebf9c56f44c4665a8df76080a"} Jan 27 18:28:20 crc kubenswrapper[4907]: E0127 18:28:20.613185 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kbngs" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.617241 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.653733 4907 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.768499 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.976797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.169441 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:28:21 crc kubenswrapper[4907]: W0127 18:28:21.194231 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90a7953e_f884_40eb_a25f_356aefbc6b83.slice/crio-6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f WatchSource:0}: Error finding container 6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f: Status 404 returned error can't find the container with id 6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.587254 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerStarted","Data":"6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.588892 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerStarted","Data":"b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.588979 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerStarted","Data":"8748bf57233d635da5d97a9b78e3315d936eb272eba3df8885742cd6d9b0a7fa"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.590720 4907 generic.go:334] "Generic (PLEG): container finished" podID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerID="28bbb72e623034afdbf128221b10f5cd93ad8bc3e76bd585f307ba4d60b2e87c" exitCode=0 Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.590820 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerDied","Data":"28bbb72e623034afdbf128221b10f5cd93ad8bc3e76bd585f307ba4d60b2e87c"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.590842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerStarted","Data":"078eb3921a7645a6b0c598308484ce9d504153fe66f1bf90605b880a55507b5b"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.601504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerStarted","Data":"42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.606497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerStarted","Data":"1212c15afb7a7361957c7ebacb8cda44faf9003a09cf0d539ec9f77c3a803903"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.616354 4907 generic.go:334] "Generic (PLEG): container finished" podID="27a50802-6236-4a03-8ada-607126ed0127" containerID="c0f732ea1b0c17a98bc0ab655121edc08e373000ca801d81fb095f6028f7c667" exitCode=0 Jan 27 18:28:21 crc 
kubenswrapper[4907]: I0127 18:28:21.616412 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" event={"ID":"27a50802-6236-4a03-8ada-607126ed0127","Type":"ContainerDied","Data":"c0f732ea1b0c17a98bc0ab655121edc08e373000ca801d81fb095f6028f7c667"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.616448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" event={"ID":"27a50802-6236-4a03-8ada-607126ed0127","Type":"ContainerStarted","Data":"ffe96f6fb082d960e1eda4044ba984492c5310cb5c8d156c93612ff111fd1a09"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.622704 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-px4wp" podStartSLOduration=25.622684387 podStartE2EDuration="25.622684387s" podCreationTimestamp="2026-01-27 18:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:21.614268896 +0000 UTC m=+1356.743551508" watchObservedRunningTime="2026-01-27 18:28:21.622684387 +0000 UTC m=+1356.751966999" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.438718 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520806 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520867 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520952 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.521035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.521066 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.555943 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5" (OuterVolumeSpecName: "kube-api-access-kwhr5") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "kube-api-access-kwhr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.586711 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:22 crc kubenswrapper[4907]: E0127 18:28:22.587348 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a50802-6236-4a03-8ada-607126ed0127" containerName="init" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.587365 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a50802-6236-4a03-8ada-607126ed0127" containerName="init" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.593822 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a50802-6236-4a03-8ada-607126ed0127" containerName="init" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.595362 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.602115 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.604618 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.605819 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623199 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623297 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623329 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623367 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623392 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623492 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.661904 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerStarted","Data":"2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918"} Jan 27 
18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.669840 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.670485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" event={"ID":"27a50802-6236-4a03-8ada-607126ed0127","Type":"ContainerDied","Data":"ffe96f6fb082d960e1eda4044ba984492c5310cb5c8d156c93612ff111fd1a09"} Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.670595 4907 scope.go:117] "RemoveContainer" containerID="c0f732ea1b0c17a98bc0ab655121edc08e373000ca801d81fb095f6028f7c667" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.673388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerStarted","Data":"2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1"} Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726066 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726182 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726217 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726266 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726311 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726337 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.734561 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"neutron-54487fdc5c-ktzbt\" (UID: 
\"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.739041 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.769228 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.772769 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.780110 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.784242 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.784532 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.785215 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.794243 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.799017 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.826419 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config" (OuterVolumeSpecName: "config") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.827981 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.828010 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.828024 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.828036 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.838104 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.931116 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.054104 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.150353 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.159177 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.722457 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerStarted","Data":"86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.725058 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.735065 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerStarted","Data":"e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.735274 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" containerID="cri-o://42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df" 
gracePeriod=30 Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.735649 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" containerID="cri-o://e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19" gracePeriod=30 Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.756187 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" podStartSLOduration=4.756168726 podStartE2EDuration="4.756168726s" podCreationTimestamp="2026-01-27 18:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:23.742800423 +0000 UTC m=+1358.872083055" watchObservedRunningTime="2026-01-27 18:28:23.756168726 +0000 UTC m=+1358.885451338" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.783730 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.783707105 podStartE2EDuration="26.783707105s" podCreationTimestamp="2026-01-27 18:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:23.778045113 +0000 UTC m=+1358.907327725" watchObservedRunningTime="2026-01-27 18:28:23.783707105 +0000 UTC m=+1358.912989717" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.787666 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a50802-6236-4a03-8ada-607126ed0127" path="/var/lib/kubelet/pods/27a50802-6236-4a03-8ada-607126ed0127/volumes" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.788224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" 
event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerStarted","Data":"c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.788257 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.788269 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerStarted","Data":"7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.796197 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.831871 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.840209 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59cf67488d-dzx5l" podStartSLOduration=4.840187203 podStartE2EDuration="4.840187203s" podCreationTimestamp="2026-01-27 18:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:23.812700106 +0000 UTC m=+1358.941982718" watchObservedRunningTime="2026-01-27 18:28:23.840187203 +0000 UTC m=+1358.969469815" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.889125 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6gppf" podStartSLOduration=4.772589905 podStartE2EDuration="38.889104875s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 
18:27:47.196036873 +0000 UTC m=+1322.325319485" lastFinishedPulling="2026-01-27 18:28:21.312551843 +0000 UTC m=+1356.441834455" observedRunningTime="2026-01-27 18:28:23.875821944 +0000 UTC m=+1359.005104556" watchObservedRunningTime="2026-01-27 18:28:23.889104875 +0000 UTC m=+1359.018387487" Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.812448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerStarted","Data":"1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.812543 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" containerID="cri-o://2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918" gracePeriod=30 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.812640 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" containerID="cri-o://1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f" gracePeriod=30 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.817880 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerStarted","Data":"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.817933 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerStarted","Data":"8568e05f8defdc45da4b5e782f2fecb663f59f14a17d880cf92f38ca7f0d7c34"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 
18:28:24.824377 4907 generic.go:334] "Generic (PLEG): container finished" podID="298d34df-8e81-4086-a6fa-d234e71167af" containerID="e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19" exitCode=0 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.824414 4907 generic.go:334] "Generic (PLEG): container finished" podID="298d34df-8e81-4086-a6fa-d234e71167af" containerID="42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df" exitCode=143 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.824454 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerDied","Data":"e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.824499 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerDied","Data":"42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.839002 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.838984586 podStartE2EDuration="28.838984586s" podCreationTimestamp="2026-01-27 18:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:24.831681967 +0000 UTC m=+1359.960964579" watchObservedRunningTime="2026-01-27 18:28:24.838984586 +0000 UTC m=+1359.968267198" Jan 27 18:28:25 crc kubenswrapper[4907]: E0127 18:28:25.235058 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod993b3392_f1fa_4172_b516_5a22b9d636ad.slice/crio-1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod993b3392_f1fa_4172_b516_5a22b9d636ad.slice/crio-conmon-1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.858957 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.859137 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerStarted","Data":"89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877314 4907 generic.go:334] "Generic (PLEG): container finished" podID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerID="1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f" exitCode=0 Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877728 4907 generic.go:334] "Generic (PLEG): container finished" podID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerID="2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918" exitCode=143 Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerDied","Data":"1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877465 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-x9tl4" 
podStartSLOduration=2.811639729 podStartE2EDuration="40.877446945s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:47.197475434 +0000 UTC m=+1322.326758046" lastFinishedPulling="2026-01-27 18:28:25.26328265 +0000 UTC m=+1360.392565262" observedRunningTime="2026-01-27 18:28:25.873396359 +0000 UTC m=+1361.002679001" watchObservedRunningTime="2026-01-27 18:28:25.877446945 +0000 UTC m=+1361.006729557" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerDied","Data":"2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.889267 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerStarted","Data":"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.889423 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.893345 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.893739 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerDied","Data":"8945b01efe6f0eefa5770c937a9e0da1d1ed380ebf9c56f44c4665a8df76080a"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.893777 4907 scope.go:117] "RemoveContainer" containerID="e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.908933 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909110 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909147 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909206 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: 
I0127 18:28:25.909291 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909518 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909775 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.923742 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.934270 4907 scope.go:117] "RemoveContainer" containerID="42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.941308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt" (OuterVolumeSpecName: "kube-api-access-rccqt") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). 
InnerVolumeSpecName "kube-api-access-rccqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.943603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs" (OuterVolumeSpecName: "logs") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.957744 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts" (OuterVolumeSpecName: "scripts") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.960989 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54487fdc5c-ktzbt" podStartSLOduration=3.960972638 podStartE2EDuration="3.960972638s" podCreationTimestamp="2026-01-27 18:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:25.947810921 +0000 UTC m=+1361.077093533" watchObservedRunningTime="2026-01-27 18:28:25.960972638 +0000 UTC m=+1361.090255250" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.983961 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (OuterVolumeSpecName: "glance") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.984817 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.009675 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data" (OuterVolumeSpecName: "config-data") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012449 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012565 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012638 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012657 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012673 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012687 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012697 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.062615 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.062773 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8") on node "crc" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.119310 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.252229 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.286685 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.313603 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: E0127 18:28:26.315558 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.315663 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" Jan 27 18:28:26 crc kubenswrapper[4907]: E0127 18:28:26.315743 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.315797 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.316034 4907 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.316124 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.320267 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.330636 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.331107 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.344770 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.346823 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427203 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427369 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427449 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427466 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427630 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427666 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8p2s\" (UniqueName: \"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427700 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427734 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427779 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427830 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.428933 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs" (OuterVolumeSpecName: "logs") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.459864 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.478173 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (OuterVolumeSpecName: "glance") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.520664 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.520706 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.520743 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.521428 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.521470 4907 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31" gracePeriod=600 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529319 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529359 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529524 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8p2s\" (UniqueName: 
\"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529908 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529922 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530011 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530030 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530081 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530102 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530113 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.532908 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.534240 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.535491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.545851 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs" (OuterVolumeSpecName: "kube-api-access-fqhjs") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "kube-api-access-fqhjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.546311 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.546353 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac46f52a85ef09145563fd0548ce08354897473e6fde7cb6037ea95dd6b9939/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.555765 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.555947 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts" (OuterVolumeSpecName: "scripts") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.556266 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8p2s\" (UniqueName: \"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.564013 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.569918 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.577232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.608263 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data" (OuterVolumeSpecName: "config-data") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.630296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632181 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632214 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632224 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632234 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.637138 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.637432 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1") on node "crc" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.671284 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.740045 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.916269 4907 generic.go:334] "Generic (PLEG): container finished" podID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerID="b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964" exitCode=0 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.916635 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerDied","Data":"b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964"} Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.941851 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerDied","Data":"1212c15afb7a7361957c7ebacb8cda44faf9003a09cf0d539ec9f77c3a803903"} Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.941903 4907 scope.go:117] "RemoveContainer" containerID="1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.942009 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.963153 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31"} Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.963223 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31" exitCode=0 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.974006 4907 generic.go:334] "Generic (PLEG): container finished" podID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerID="c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9" exitCode=0 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.975108 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerDied","Data":"c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9"} Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.001433 4907 scope.go:117] "RemoveContainer" containerID="2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.043378 4907 scope.go:117] "RemoveContainer" containerID="07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.047194 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.083633 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.115317 4907 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: E0127 18:28:27.115920 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.115942 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" Jan 27 18:28:27 crc kubenswrapper[4907]: E0127 18:28:27.115964 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.115972 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.116226 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.116258 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.118451 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.120898 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.122615 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.154617 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270097 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270218 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270298 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270313 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270400 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270969 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.271127 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.271158 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: W0127 18:28:27.318337 4907 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26ebee0c_64db_4384_9e27_95691ee28a17.slice/crio-f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13 WatchSource:0}: Error finding container f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13: Status 404 returned error can't find the container with id f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13 Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.327666 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.372950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373036 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373094 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373115 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373205 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373225 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373275 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.374261 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.374373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.380156 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.380206 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f246d510422cbc2bc7c65e4cfc4b09adee7977bbf094457002b4446ed6bccfbd/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.383965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.384155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.389982 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.394639 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.394659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.453970 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.753132 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.772029 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298d34df-8e81-4086-a6fa-d234e71167af" path="/var/lib/kubelet/pods/298d34df-8e81-4086-a6fa-d234e71167af/volumes" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.773148 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" path="/var/lib/kubelet/pods/993b3392-f1fa-4172-b516-5a22b9d636ad/volumes" Jan 27 18:28:28 crc kubenswrapper[4907]: I0127 18:28:28.003544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"} Jan 27 18:28:28 crc kubenswrapper[4907]: I0127 18:28:28.016430 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerStarted","Data":"f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13"} Jan 27 18:28:28 crc kubenswrapper[4907]: I0127 18:28:28.414088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:29 crc kubenswrapper[4907]: I0127 18:28:29.052698 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerStarted","Data":"57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4"} Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.020718 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.133099 4907 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.133757 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" containerID="cri-o://f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a" gracePeriod=10 Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.937830 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.080906 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerID="89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2" exitCode=0 Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.081019 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerDied","Data":"89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2"} Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.083579 4907 generic.go:334] "Generic (PLEG): container finished" podID="89f27191-1460-4103-a832-acf1b0a8eca1" containerID="f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a" exitCode=0 Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.083613 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerDied","Data":"f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a"} Jan 27 18:28:31 crc kubenswrapper[4907]: W0127 18:28:31.437750 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc45c3a_8ebc_47ae_b823_b3013e4ea0df.slice/crio-a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde WatchSource:0}: Error finding container a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde: Status 404 returned error can't find the container with id a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.750197 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.762793 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.880483 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881044 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881098 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881166 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881264 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881340 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881425 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881465 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 
18:28:31.881577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881629 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.884789 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs" (OuterVolumeSpecName: "logs") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.884951 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr" (OuterVolumeSpecName: "kube-api-access-phpwr") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "kube-api-access-phpwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.885232 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts" (OuterVolumeSpecName: "scripts") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.893635 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.896777 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm" (OuterVolumeSpecName: "kube-api-access-ch8dm") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "kube-api-access-ch8dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.896973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.898337 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts" (OuterVolumeSpecName: "scripts") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.928420 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.941017 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.961739 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data" (OuterVolumeSpecName: "config-data") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.976526 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.982735 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data" (OuterVolumeSpecName: "config-data") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983595 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983727 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983778 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983897 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983917 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984348 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984359 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984370 4907 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984378 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984386 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984393 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984401 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984408 4907 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984416 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984424 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984431 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.005290 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj" (OuterVolumeSpecName: "kube-api-access-c6vvj") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "kube-api-access-c6vvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.093070 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.113382 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config" (OuterVolumeSpecName: "config") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.144852 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerStarted","Data":"a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.148049 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerDied","Data":"9f9319897192a3481febde0075f2336b2935d057d9a13f6811945fb00c33176e"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.148098 4907 scope.go:117] "RemoveContainer" containerID="f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.148234 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.158059 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.175395 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerDied","Data":"58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.175435 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.175497 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.185886 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.196182 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.196220 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.196233 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.201839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerDied","Data":"8748bf57233d635da5d97a9b78e3315d936eb272eba3df8885742cd6d9b0a7fa"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.201879 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8748bf57233d635da5d97a9b78e3315d936eb272eba3df8885742cd6d9b0a7fa" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.201951 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.210731 4907 scope.go:117] "RemoveContainer" containerID="9372dc1a167de07a6718a73cef4ed28d22b27bf8d1e903c2b77f36fdfb200ef7" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.234681 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.247394 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.258235 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerStarted","Data":"47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.264239 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.296100 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-rl9vb" podStartSLOduration=2.544046842 podStartE2EDuration="47.296078012s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:46.803120607 +0000 UTC m=+1321.932403219" lastFinishedPulling="2026-01-27 18:28:31.555151777 +0000 UTC m=+1366.684434389" observedRunningTime="2026-01-27 18:28:32.294233009 +0000 UTC m=+1367.423515631" watchObservedRunningTime="2026-01-27 18:28:32.296078012 +0000 UTC m=+1367.425360614" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.299277 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.299319 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.752252 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.781750 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.795400 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.810147 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"3d3838ba-a929-4aab-a58d-dd4f39628f00\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.810208 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"3d3838ba-a929-4aab-a58d-dd4f39628f00\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.810367 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"3d3838ba-a929-4aab-a58d-dd4f39628f00\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.818076 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l" (OuterVolumeSpecName: "kube-api-access-pmb6l") pod "3d3838ba-a929-4aab-a58d-dd4f39628f00" (UID: "3d3838ba-a929-4aab-a58d-dd4f39628f00"). InnerVolumeSpecName "kube-api-access-pmb6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.822981 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3d3838ba-a929-4aab-a58d-dd4f39628f00" (UID: "3d3838ba-a929-4aab-a58d-dd4f39628f00"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.886657 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d3838ba-a929-4aab-a58d-dd4f39628f00" (UID: "3d3838ba-a929-4aab-a58d-dd4f39628f00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.913026 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.913073 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.913086 4907 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.997675 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-84847858bd-jp29w"] Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 
18:28:32.998268 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerName="placement-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998287 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerName="placement-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998320 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="init" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998327 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="init" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998346 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerName="keystone-bootstrap" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998360 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerName="keystone-bootstrap" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998403 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerName="barbican-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998411 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerName="barbican-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998421 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998428 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998803 4907 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998833 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerName="placement-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998845 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerName="barbican-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998856 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerName="keystone-bootstrap" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:32.999850 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.007923 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008250 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008519 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008696 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008971 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.009665 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.025624 4907 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7bb5448674-jfs9k"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.040512 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.044290 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.049769 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.054187 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4w9sx" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.066408 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.074946 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.109760 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bb5448674-jfs9k"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136313 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-combined-ca-bundle\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136392 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-credential-keys\") pod 
\"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136434 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5z7\" (UniqueName: \"kubernetes.io/projected/4038dea7-e4ef-436d-baf3-47f8757e3bc0-kube-api-access-kb5z7\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136457 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-config-data\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136499 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-public-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136536 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-combined-ca-bundle\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136650 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-internal-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136708 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-public-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136754 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-scripts\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136807 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-scripts\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136833 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2xg\" (UniqueName: \"kubernetes.io/projected/345bd96a-9890-4264-886f-edccc999706b-kube-api-access-cb2xg\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136878 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-fernet-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136913 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-internal-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4038dea7-e4ef-436d-baf3-47f8757e3bc0-logs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-config-data\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.137325 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84847858bd-jp29w"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239758 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-fernet-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239815 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-internal-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239843 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4038dea7-e4ef-436d-baf3-47f8757e3bc0-logs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239874 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-config-data\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239913 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-combined-ca-bundle\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239954 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-credential-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239986 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kb5z7\" (UniqueName: \"kubernetes.io/projected/4038dea7-e4ef-436d-baf3-47f8757e3bc0-kube-api-access-kb5z7\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-config-data\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240040 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-public-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240067 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-combined-ca-bundle\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-internal-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240167 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-public-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240214 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-scripts\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240257 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-scripts\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240284 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2xg\" (UniqueName: \"kubernetes.io/projected/345bd96a-9890-4264-886f-edccc999706b-kube-api-access-cb2xg\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.244931 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4038dea7-e4ef-436d-baf3-47f8757e3bc0-logs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.247917 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-fernet-keys\") pod \"keystone-84847858bd-jp29w\" (UID: 
\"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.248109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-scripts\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.248323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-credential-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.249704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-scripts\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.251055 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-config-data\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.253951 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-public-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.254124 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-internal-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.255041 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-combined-ca-bundle\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.260388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-combined-ca-bundle\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.264396 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-public-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.264610 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-internal-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.271603 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2xg\" 
(UniqueName: \"kubernetes.io/projected/345bd96a-9890-4264-886f-edccc999706b-kube-api-access-cb2xg\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.272764 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5z7\" (UniqueName: \"kubernetes.io/projected/4038dea7-e4ef-436d-baf3-47f8757e3bc0-kube-api-access-kb5z7\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.273336 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-config-data\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.288778 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerDied","Data":"840873a3d56f4dc36cd43eafa6b62d032d44d3c5edf94de7d25cbfc122cc2c74"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.288823 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840873a3d56f4dc36cd43eafa6b62d032d44d3c5edf94de7d25cbfc122cc2c74" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.288918 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.300286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerStarted","Data":"584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.300341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerStarted","Data":"1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.314902 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerStarted","Data":"407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.350424 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.350405647 podStartE2EDuration="6.350405647s" podCreationTimestamp="2026-01-27 18:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:33.349009207 +0000 UTC m=+1368.478291819" watchObservedRunningTime="2026-01-27 18:28:33.350405647 +0000 UTC m=+1368.479688259" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.363279 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.393165 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.411631 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f9fb848c6-w9s7n"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.413441 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.432537 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sh25w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.432901 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.433029 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.462889 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-cc6c576c9-l5q6m"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.479367 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.482177 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.513497 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cc6c576c9-l5q6m"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.598052 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f9fb848c6-w9s7n"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600633 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data-custom\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600713 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42z2b\" (UniqueName: \"kubernetes.io/projected/72844033-17b7-4a8b-973d-f8ef443cd529-kube-api-access-42z2b\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-combined-ca-bundle\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600762 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-combined-ca-bundle\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600798 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-logs\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600814 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600836 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data-custom\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600868 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " 
pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72844033-17b7-4a8b-973d-f8ef443cd529-logs\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600953 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b59hc\" (UniqueName: \"kubernetes.io/projected/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-kube-api-access-b59hc\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.602514 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.602492158 podStartE2EDuration="7.602492158s" podCreationTimestamp="2026-01-27 18:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:33.404807285 +0000 UTC m=+1368.534089907" watchObservedRunningTime="2026-01-27 18:28:33.602492158 +0000 UTC m=+1368.731774780" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.652131 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.666233 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: E0127 18:28:33.668935 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d3838ba_a929_4aab_a58d_dd4f39628f00.slice\": RecentStats: unable to find data in memory cache]" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.684029 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.703893 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704213 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72844033-17b7-4a8b-973d-f8ef443cd529-logs\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704279 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b59hc\" (UniqueName: \"kubernetes.io/projected/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-kube-api-access-b59hc\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704326 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704363 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data-custom\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704407 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704461 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704486 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42z2b\" 
(UniqueName: \"kubernetes.io/projected/72844033-17b7-4a8b-973d-f8ef443cd529-kube-api-access-42z2b\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704696 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-combined-ca-bundle\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-combined-ca-bundle\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704782 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-logs\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704802 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704837 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data-custom\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704876 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704909 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.705963 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-logs\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.706006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72844033-17b7-4a8b-973d-f8ef443cd529-logs\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.720015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-combined-ca-bundle\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.728019 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-combined-ca-bundle\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.728500 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data-custom\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.728548 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data-custom\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.744744 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.756799 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-42z2b\" (UniqueName: \"kubernetes.io/projected/72844033-17b7-4a8b-973d-f8ef443cd529-kube-api-access-42z2b\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.757000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823492 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823758 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823826 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.830089 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.831634 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.833772 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") 
pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.835260 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" path="/var/lib/kubelet/pods/89f27191-1460-4103-a832-acf1b0a8eca1/volumes" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.837624 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.839064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.840481 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.840948 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.855158 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b59hc\" (UniqueName: \"kubernetes.io/projected/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-kube-api-access-b59hc\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.855375 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.869835 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.936434 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.943352 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.989072 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.000101 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030089 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030162 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030209 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030362 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148867 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148903 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148983 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.149337 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.157868 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.160749 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.174234 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.252106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nn5t\" (UniqueName: 
\"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.283488 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: W0127 18:28:34.529070 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod345bd96a_9890_4264_886f_edccc999706b.slice/crio-4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345 WatchSource:0}: Error finding container 4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345: Status 404 returned error can't find the container with id 4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345 Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.568380 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bb5448674-jfs9k"] Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.679500 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84847858bd-jp29w"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.319111 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f9fb848c6-w9s7n"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.362998 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cc6c576c9-l5q6m"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.385233 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84847858bd-jp29w" event={"ID":"345bd96a-9890-4264-886f-edccc999706b","Type":"ContainerStarted","Data":"4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345"} Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.402216 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bb5448674-jfs9k" event={"ID":"4038dea7-e4ef-436d-baf3-47f8757e3bc0","Type":"ContainerStarted","Data":"ec0522a7da0f218de9fc662607c664bcc16a29abfa0831436a984aa8182bf481"} Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.486221 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.730185 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.438869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bb5448674-jfs9k" event={"ID":"4038dea7-e4ef-436d-baf3-47f8757e3bc0","Type":"ContainerStarted","Data":"89aacfe958e6cb0acf3ee1e5bc3aa9599c2b76ad6a48366d7a127a5e6ba2f087"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.439241 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bb5448674-jfs9k" event={"ID":"4038dea7-e4ef-436d-baf3-47f8757e3bc0","Type":"ContainerStarted","Data":"6d6069b81341f23c84eb8cbe50424ddc2390c73536fc5d8a699801d5814ef5ac"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.439631 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.442863 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84847858bd-jp29w" event={"ID":"345bd96a-9890-4264-886f-edccc999706b","Type":"ContainerStarted","Data":"4d33d268c6a75e4b39e48513478196cb131df437afee47998e1a30f151ba37c8"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.443569 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.449534 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="4eb40734-63ad-481e-8830-da770faf9a95" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f" exitCode=0 Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.449677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerDied","Data":"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.449709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerStarted","Data":"9696ceb9132fbe70ed57d119137a4111952838c23ff4de6225536d7aaf063783"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.458433 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerStarted","Data":"086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.458493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerStarted","Data":"23d12280d966edcb48634c6a46a4a55471151b9b7f710d4d1606e12380bba1d5"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.465546 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" event={"ID":"06cb3a1d-b998-43fe-8939-29cd2c3fd31f","Type":"ContainerStarted","Data":"63052a945af669aa2abed93c574f5734b705d5b0401841e6a5ca4ab454c90357"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.482465 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cc6c576c9-l5q6m" 
event={"ID":"72844033-17b7-4a8b-973d-f8ef443cd529","Type":"ContainerStarted","Data":"1a7df1f120bf2f34614d97fd9989a69f3305129b2ef83fa48dd4227d716d63ea"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.486777 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7bb5448674-jfs9k" podStartSLOduration=4.486750465 podStartE2EDuration="4.486750465s" podCreationTimestamp="2026-01-27 18:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:36.46492784 +0000 UTC m=+1371.594210462" watchObservedRunningTime="2026-01-27 18:28:36.486750465 +0000 UTC m=+1371.616033077" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.587236 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-84847858bd-jp29w" podStartSLOduration=4.587215013 podStartE2EDuration="4.587215013s" podCreationTimestamp="2026-01-27 18:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:36.556205635 +0000 UTC m=+1371.685488247" watchObservedRunningTime="2026-01-27 18:28:36.587215013 +0000 UTC m=+1371.716497625" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.673012 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.673072 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.765532 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.767521 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.512724 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerStarted","Data":"58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414"} Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.513073 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.513100 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.514004 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.693283 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f5bc66894-v82tp" podStartSLOduration=4.693263399 podStartE2EDuration="4.693263399s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:37.534166791 +0000 UTC m=+1372.663449543" watchObservedRunningTime="2026-01-27 18:28:37.693263399 +0000 UTC m=+1372.822546011" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.704272 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c7bdc78db-g6vvs"] Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.706023 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.708246 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.711723 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.721600 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c7bdc78db-g6vvs"] Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.762657 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.762698 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.788745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-combined-ca-bundle\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.788828 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-public-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.788975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data-custom\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-logs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g786j\" (UniqueName: \"kubernetes.io/projected/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-kube-api-access-g786j\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-internal-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.817178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.821093 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892157 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-combined-ca-bundle\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892251 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-public-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892285 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data-custom\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892350 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-logs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892441 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g786j\" (UniqueName: \"kubernetes.io/projected/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-kube-api-access-g786j\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892536 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-internal-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.893994 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-logs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.904299 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-internal-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.911867 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data-custom\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.912327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-public-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.912861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-combined-ca-bundle\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.912922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.954283 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g786j\" (UniqueName: \"kubernetes.io/projected/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-kube-api-access-g786j\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.038633 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.522734 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.523256 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.523279 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.523295 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.398286 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c7bdc78db-g6vvs"] Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.543074 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerStarted","Data":"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.550071 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" event={"ID":"06cb3a1d-b998-43fe-8939-29cd2c3fd31f","Type":"ContainerStarted","Data":"149b34e0b046759fa665fe2421ca129e91c5f6a2a4c5f345aef8965e55ca7275"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.560664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cc6c576c9-l5q6m" event={"ID":"72844033-17b7-4a8b-973d-f8ef443cd529","Type":"ContainerStarted","Data":"0b95f02ae9ee3d72259d4e9811477be48d031b6c298c7518e8906db81acf8674"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.570150 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7bdc78db-g6vvs" event={"ID":"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc","Type":"ContainerStarted","Data":"4195b724a8b5a24a8176a944768a7ddfe9eb962c0dbb1e8ebea60589295d1a54"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.597356 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" podStartSLOduration=6.597333575 podStartE2EDuration="6.597333575s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:39.56851057 +0000 UTC m=+1374.697793182" watchObservedRunningTime="2026-01-27 18:28:39.597333575 +0000 UTC m=+1374.726616187" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.601392 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cc6c576c9-l5q6m" event={"ID":"72844033-17b7-4a8b-973d-f8ef443cd529","Type":"ContainerStarted","Data":"d5365e972db180c3c14949f9027ca94f643322944433a6f5db8a03398d64eb46"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7bdc78db-g6vvs" event={"ID":"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc","Type":"ContainerStarted","Data":"8eb0dd15d856d236ff614dc3060a2dbfd45bb56180b7a264db0f960b52fa40a8"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603177 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7bdc78db-g6vvs" event={"ID":"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc","Type":"ContainerStarted","Data":"30888fffed5c0d8cb46746b9164496c9ea600762ab5c1f070b4569ae375a32f7"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603696 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603735 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.607336 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" event={"ID":"06cb3a1d-b998-43fe-8939-29cd2c3fd31f","Type":"ContainerStarted","Data":"ea147e6f6cbea0983451774119d0f87c9f47e13823e7e13b3339b578eac6d575"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.609716 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.609741 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.609807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerStarted","Data":"3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.610115 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.635780 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-cc6c576c9-l5q6m" podStartSLOduration=4.306250852 podStartE2EDuration="7.635760224s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="2026-01-27 18:28:35.394262679 +0000 UTC m=+1370.523545291" lastFinishedPulling="2026-01-27 18:28:38.723772051 +0000 UTC m=+1373.853054663" observedRunningTime="2026-01-27 18:28:40.630893395 +0000 UTC m=+1375.760176017" watchObservedRunningTime="2026-01-27 18:28:40.635760224 +0000 UTC m=+1375.765042836" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.656515 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" podStartSLOduration=4.252700118 podStartE2EDuration="7.656500498s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="2026-01-27 18:28:35.394490866 +0000 UTC m=+1370.523773478" lastFinishedPulling="2026-01-27 18:28:38.798291246 +0000 UTC m=+1373.927573858" observedRunningTime="2026-01-27 18:28:40.656055345 +0000 UTC m=+1375.785337957" watchObservedRunningTime="2026-01-27 18:28:40.656500498 +0000 UTC m=+1375.785783100" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.700396 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kbngs" podStartSLOduration=6.405071701 podStartE2EDuration="55.700376375s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:47.196197467 +0000 UTC m=+1322.325480079" lastFinishedPulling="2026-01-27 18:28:36.491502141 +0000 UTC m=+1371.620784753" observedRunningTime="2026-01-27 18:28:40.689255547 +0000 UTC m=+1375.818538149" watchObservedRunningTime="2026-01-27 18:28:40.700376375 +0000 UTC m=+1375.829658987" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.724862 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c7bdc78db-g6vvs" podStartSLOduration=3.724831316 podStartE2EDuration="3.724831316s" podCreationTimestamp="2026-01-27 18:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:40.722618632 +0000 UTC m=+1375.851901244" watchObservedRunningTime="2026-01-27 18:28:40.724831316 +0000 UTC m=+1375.854113918" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.157207 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.157706 4907 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.161265 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.161397 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.163888 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.164236 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:43 crc kubenswrapper[4907]: I0127 18:28:43.646394 4907 generic.go:334] "Generic (PLEG): container finished" podID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerID="47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28" exitCode=0 Jan 27 18:28:43 crc kubenswrapper[4907]: I0127 18:28:43.646658 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerDied","Data":"47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28"} Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.002474 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.097482 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.097803 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns" containerID="cri-o://86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f" gracePeriod=10 Jan 27 18:28:44 crc 
kubenswrapper[4907]: I0127 18:28:44.671705 4907 generic.go:334] "Generic (PLEG): container finished" podID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerID="86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f" exitCode=0 Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.671990 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerDied","Data":"86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f"} Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.020981 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: connect: connection refused" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.820361 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.823631 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rl9vb" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.911456 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"90ffb508-65d2-4c20-95db-209a1c9a3399\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.912202 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"90ffb508-65d2-4c20-95db-209a1c9a3399\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.912373 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"90ffb508-65d2-4c20-95db-209a1c9a3399\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.919300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z" (OuterVolumeSpecName: "kube-api-access-hk57z") pod "90ffb508-65d2-4c20-95db-209a1c9a3399" (UID: "90ffb508-65d2-4c20-95db-209a1c9a3399"). InnerVolumeSpecName "kube-api-access-hk57z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.947237 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90ffb508-65d2-4c20-95db-209a1c9a3399" (UID: "90ffb508-65d2-4c20-95db-209a1c9a3399"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.015829 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.015872 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.039342 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data" (OuterVolumeSpecName: "config-data") pod "90ffb508-65d2-4c20-95db-209a1c9a3399" (UID: "90ffb508-65d2-4c20-95db-209a1c9a3399"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.119853 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.258619 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.499805 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.698643 4907 generic.go:334] "Generic (PLEG): container finished" podID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerID="3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87" exitCode=0 Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.698735 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerDied","Data":"3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87"} Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.703797 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rl9vb" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.703838 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerDied","Data":"fc4f516e379ca6129c715c0b9a600b0f5b1d171146eae6a53e9f72e6f97ae48c"} Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.703864 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc4f516e379ca6129c715c0b9a600b0f5b1d171146eae6a53e9f72e6f97ae48c" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.750569 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerDied","Data":"f268f2f373629b4712746f03d9c3a8027b21b68587212bf16359f8e777653bf7"} Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.751080 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f268f2f373629b4712746f03d9c3a8027b21b68587212bf16359f8e777653bf7" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.759870 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerDied","Data":"078eb3921a7645a6b0c598308484ce9d504153fe66f1bf90605b880a55507b5b"} Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.760151 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="078eb3921a7645a6b0c598308484ce9d504153fe66f1bf90605b880a55507b5b" Jan 27 
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.842430 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.871004 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v"
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") "
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897695 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") "
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897733 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") "
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897812 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") "
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897848 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") "
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897924 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") "
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.898673 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.900427 4907 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.910731 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl" (OuterVolumeSpecName: "kube-api-access-2r4jl") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "kube-api-access-2r4jl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.913990 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.919679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts" (OuterVolumeSpecName: "scripts") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.954586 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.992969 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data" (OuterVolumeSpecName: "config-data") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.001369 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") "
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.001811 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") "
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.001928 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") "
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.002172 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") "
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.002286 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") "
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.002395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") "
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003154 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003257 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003336 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003415 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003492 4907 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.011950 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz" (OuterVolumeSpecName: "kube-api-access-cr2nz") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "kube-api-access-cr2nz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.077517 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.097923 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.098911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.100819 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105285 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105319 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105331 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105340 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105350 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.106038 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config" (OuterVolumeSpecName: "config") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.207153 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.593627 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c7bdc78db-g6vvs"
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.710794 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c7bdc78db-g6vvs"
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.800096 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbngs"
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.800766 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa"}
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.800918 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v"
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.804962 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent" containerID="cri-o://696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7" gracePeriod=30
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.805124 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="proxy-httpd" containerID="cri-o://afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa" gracePeriod=30
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.805179 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core" containerID="cri-o://eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed" gracePeriod=30
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.805220 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent" containerID="cri-o://e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1" gracePeriod=30
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.810447 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"]
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.811758 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" containerID="cri-o://086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2" gracePeriod=30
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.811926 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" containerID="cri-o://58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414" gracePeriod=30
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.822275 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": EOF"
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.850277 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.541452926 podStartE2EDuration="1m4.850260366s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:47.328361564 +0000 UTC m=+1322.457644176" lastFinishedPulling="2026-01-27 18:28:48.637169004 +0000 UTC m=+1383.766451616" observedRunningTime="2026-01-27 18:28:49.843637657 +0000 UTC m=+1384.972920269" watchObservedRunningTime="2026-01-27 18:28:49.850260366 +0000 UTC m=+1384.979542978"
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.905634 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"]
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.970629 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"]
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.123004 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59cf67488d-dzx5l"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.174691 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:28:50 crc kubenswrapper[4907]: E0127 18:28:50.175156 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerName="cinder-db-sync"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175172 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerName="cinder-db-sync"
Jan 27 18:28:50 crc kubenswrapper[4907]: E0127 18:28:50.175205 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="init"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175212 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="init"
Jan 27 18:28:50 crc kubenswrapper[4907]: E0127 18:28:50.175221 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerName="heat-db-sync"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175227 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerName="heat-db-sync"
Jan 27 18:28:50 crc kubenswrapper[4907]: E0127 18:28:50.175257 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175263 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175467 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerName="heat-db-sync"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175490 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175501 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerName="cinder-db-sync"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.176801 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182207 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182382 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182488 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182607 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vbpkv"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.207384 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"]
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.209802 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.217138 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236238 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236281 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236429 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236682 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.237667 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"]
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.339773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340100 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340250 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340366 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340394 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340440 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340457 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340497 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340524 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340548 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.351733 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.354261 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.389213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.389989 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.390247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.399155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445078 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445160 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445179 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445195 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445210 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445248 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.446302 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.446326 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.447265 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.463538 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.465297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.469432 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.480318 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.483512 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.493170 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.493276 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.503455 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.540102 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551814 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551880 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551919 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551939 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.552033 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.552056 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.552077 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.657824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658534 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658595 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0"
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658621 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658761 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658783 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.659125 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.662321 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.677945 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.685012 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.691543 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.698043 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.698520 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.791103 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.791324 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" containerID="cri-o://8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78" gracePeriod=30 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.794071 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" containerID="cri-o://5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a" gracePeriod=30 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.811599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.825334 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerID="086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2" exitCode=143 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.825394 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerDied","Data":"086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2"} Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866494 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa" exitCode=0 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866523 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed" exitCode=2 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa"} Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866585 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed"} Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.905626 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c6c685b5-88m65"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.913444 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.934068 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6c685b5-88m65"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.980622 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-public-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-ovndb-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-combined-ca-bundle\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-internal-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981170 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-httpd-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981334 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlcvj\" (UniqueName: \"kubernetes.io/projected/eb34862c-021c-4e5e-b4c0-ceffb9222438-kube-api-access-zlcvj\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.083803 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-public-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.083945 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-ovndb-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.083967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-combined-ca-bundle\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084019 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-internal-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084040 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-httpd-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-config\") pod 
\"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlcvj\" (UniqueName: \"kubernetes.io/projected/eb34862c-021c-4e5e-b4c0-ceffb9222438-kube-api-access-zlcvj\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.102393 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-ovndb-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.104317 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-combined-ca-bundle\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.112363 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-httpd-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.112991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 
18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.116781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-public-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.116915 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-internal-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.126859 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlcvj\" (UniqueName: \"kubernetes.io/projected/eb34862c-021c-4e5e-b4c0-ceffb9222438-kube-api-access-zlcvj\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.166400 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.174735 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": read tcp 10.217.0.2:45666->10.217.0.195:9696: read: connection reset by peer" Jan 27 18:28:51 crc kubenswrapper[4907]: W0127 18:28:51.536261 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c24523b_b339_4889_9af6_19c8ec0b1048.slice/crio-6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7 WatchSource:0}: Error finding container 6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7: Status 404 returned error can't find the container with id 6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7 Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.543231 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.676970 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.806019 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" path="/var/lib/kubelet/pods/026a73e3-c86f-49dd-b04d-e8208c9ce9e2/volumes" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.806874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.831675 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6c685b5-88m65"] Jan 27 18:28:51 crc kubenswrapper[4907]: W0127 18:28:51.882382 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb34862c_021c_4e5e_b4c0_ceffb9222438.slice/crio-e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9 WatchSource:0}: Error finding container e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9: Status 404 returned error can't find the container with id e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9 Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.923830 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerStarted","Data":"6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7"} Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.957040 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerStarted","Data":"9302d404bfba0eee2d8da4cca550efe55dc895f1186f50f082b7659191ac1d96"} Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.975479 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerStarted","Data":"d0dcb4cda3e336fc0ccb50a2d68dd9abd7cba58c7276a287ff928b11ff880a1e"} Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.012609 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7" exitCode=0 Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.012669 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7"} Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.025865 4907 generic.go:334] "Generic (PLEG): container 
finished" podID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a" exitCode=0 Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.025913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerDied","Data":"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.055966 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": dial tcp 10.217.0.195:9696: connect: connection refused" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.061749 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6c685b5-88m65" event={"ID":"eb34862c-021c-4e5e-b4c0-ceffb9222438","Type":"ContainerStarted","Data":"ae5602f58ce30fbc6eef4a10ecf3cb4b9657ca313c72b5dfd16c2fed7fc614ae"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.061798 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6c685b5-88m65" event={"ID":"eb34862c-021c-4e5e-b4c0-ceffb9222438","Type":"ContainerStarted","Data":"b1ac04422082733d676bd4d7806ca0f76ee46da4044056870033680fca782d4b"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.061811 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6c685b5-88m65" event={"ID":"eb34862c-021c-4e5e-b4c0-ceffb9222438","Type":"ContainerStarted","Data":"e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.076710 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.107634 4907 
generic.go:334] "Generic (PLEG): container finished" podID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerID="a183293e03f2475814a7a549a40c6fac89c967734284ac6b7832b0a0bcbbcc1b" exitCode=0 Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.107917 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerDied","Data":"a183293e03f2475814a7a549a40c6fac89c967734284ac6b7832b0a0bcbbcc1b"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.131380 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerStarted","Data":"3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.153267 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c6c685b5-88m65" podStartSLOduration=3.153247118 podStartE2EDuration="3.153247118s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:53.123160746 +0000 UTC m=+1388.252443358" watchObservedRunningTime="2026-01-27 18:28:53.153247118 +0000 UTC m=+1388.282529730" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.263036 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1" exitCode=0 Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.263298 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.319587 4907 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.346819 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.388819 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.388866 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.388927 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389065 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389090 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 
27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389106 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389211 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389682 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.407971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.428776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27" (OuterVolumeSpecName: "kube-api-access-zmv27") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "kube-api-access-zmv27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.449089 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts" (OuterVolumeSpecName: "scripts") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495316 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495343 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495353 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495363 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.514333 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.597262 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:53 crc kubenswrapper[4907]: E0127 18:28:53.630316 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle podName:472bdc20-aa30-4204-b7ef-ef2604ebc83f nodeName:}" failed. No retries permitted until 2026-01-27 18:28:54.130289874 +0000 UTC m=+1389.259572486 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f") : error deleting /var/lib/kubelet/pods/472bdc20-aa30-4204-b7ef-ef2604ebc83f/volume-subpaths: remove /var/lib/kubelet/pods/472bdc20-aa30-4204-b7ef-ef2604ebc83f/volume-subpaths: no such file or directory Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.637659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data" (OuterVolumeSpecName: "config-data") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.699790 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.498810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.525620 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.550534 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerStarted","Data":"09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d"} Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.550604 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.562289 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"f21eb03c891c1ca372e748b6131d8a4413d1eb5b66cad03fa9fdb685e87a089a"} Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.562352 4907 scope.go:117] "RemoveContainer" containerID="afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.562381 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.576069 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerStarted","Data":"f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73"} Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.598336 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" podStartSLOduration=4.598314195 podStartE2EDuration="4.598314195s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:54.576283424 +0000 UTC m=+1389.705566046" watchObservedRunningTime="2026-01-27 18:28:54.598314195 +0000 UTC m=+1389.727596807" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.601449 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.684800 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.693703 4907 scope.go:117] "RemoveContainer" containerID="eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.711883 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.734288 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735009 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" 
containerName="proxy-httpd" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735031 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="proxy-httpd" Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735055 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735066 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent" Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735093 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735102 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent" Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735140 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735149 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735478 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735499 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="proxy-httpd" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735522 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735535 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.739250 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.742509 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.742570 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.745223 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.782757 4907 scope.go:117] "RemoveContainer" containerID="e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807218 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807433 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807582 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807661 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909666 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"ceilometer-0\" (UID: 
\"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909804 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909841 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909864 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909901 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.910337 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.911718 4907 scope.go:117] "RemoveContainer" containerID="696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.913933 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.923274 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.930658 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.942691 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.944008 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.951810 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.172938 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.359718 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:36646->10.217.0.203:9311: read: connection reset by peer" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.360280 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:36656->10.217.0.203:9311: read: connection reset by peer" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.532495 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.595659 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerID="58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414" exitCode=0 Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.595754 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerDied","Data":"58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414"} Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.598503 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerStarted","Data":"270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1"} Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.602907 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerStarted","Data":"6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef"} Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.603031 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api-log" containerID="cri-o://3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320" gracePeriod=30 Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.603289 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.603320 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" 
containerID="cri-o://6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef" gracePeriod=30 Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.607673 4907 generic.go:334] "Generic (PLEG): container finished" podID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78" exitCode=0 Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608722 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerDied","Data":"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"} Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608968 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerDied","Data":"8568e05f8defdc45da4b5e782f2fecb663f59f14a17d880cf92f38ca7f0d7c34"} Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608985 4907 scope.go:117] "RemoveContainer" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.620238 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.573054781 podStartE2EDuration="5.62021594s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="2026-01-27 18:28:51.540549998 +0000 UTC m=+1386.669832610" lastFinishedPulling="2026-01-27 18:28:52.587711157 +0000 UTC m=+1387.716993769" observedRunningTime="2026-01-27 18:28:55.619124609 +0000 UTC m=+1390.748407221" watchObservedRunningTime="2026-01-27 18:28:55.62021594 +0000 UTC m=+1390.749498572" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 
18:28:55.663073 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.663048677 podStartE2EDuration="5.663048677s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:55.654998627 +0000 UTC m=+1390.784281239" watchObservedRunningTime="2026-01-27 18:28:55.663048677 +0000 UTC m=+1390.792331289" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.672987 4907 scope.go:117] "RemoveContainer" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.719840 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.727776 4907 scope.go:117] "RemoveContainer" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a" Jan 27 18:28:55 crc kubenswrapper[4907]: E0127 18:28:55.729129 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a\": container with ID starting with 5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a not found: ID does not exist" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.729187 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"} err="failed to get container status \"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a\": rpc error: code = NotFound desc = could not find container \"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a\": container with ID starting with 
5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a not found: ID does not exist" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.729219 4907 scope.go:117] "RemoveContainer" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78" Jan 27 18:28:55 crc kubenswrapper[4907]: E0127 18:28:55.730089 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78\": container with ID starting with 8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78 not found: ID does not exist" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.730119 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"} err="failed to get container status \"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78\": rpc error: code = NotFound desc = could not find container \"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78\": container with ID starting with 8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78 not found: ID does not exist" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734356 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734444 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: 
\"18fa0523-c08a-427c-b27e-77543fe4bd94\") " Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734469 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734500 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734600 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734715 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.750911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7" (OuterVolumeSpecName: "kube-api-access-zt9q7") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "kube-api-access-zt9q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.754667 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.834754 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" path="/var/lib/kubelet/pods/472bdc20-aa30-4204-b7ef-ef2604ebc83f/volumes" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.840321 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.840594 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.852045 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.884359 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.887374 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config" (OuterVolumeSpecName: "config") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.932397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943245 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943590 4907 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943625 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943639 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.949440 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.042576 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.045132 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.149771 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150274 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150410 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150472 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150586 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.151890 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs" (OuterVolumeSpecName: "logs") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.172658 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.194820 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t" (OuterVolumeSpecName: "kube-api-access-8nn5t") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "kube-api-access-8nn5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.252856 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.262610 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.262860 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.263571 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.338138 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data" (OuterVolumeSpecName: "config-data") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.355929 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:56 crc kubenswrapper[4907]: E0127 18:28:56.368721 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35073d6_fb6a_4896_8275_9f3632f0cd2f.slice/crio-conmon-6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35073d6_fb6a_4896_8275_9f3632f0cd2f.slice/crio-6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.381306 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.381348 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.385580 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.631813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.632257 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"0dcda993740684fc7c475d953a4c78e14e7f5c8fbe1eb87431b09fbc9bf63899"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.637222 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.637699 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerDied","Data":"23d12280d966edcb48634c6a46a4a55471151b9b7f710d4d1606e12380bba1d5"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.637770 4907 scope.go:117] "RemoveContainer" containerID="58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639517 4907 generic.go:334] "Generic (PLEG): container finished" podID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerID="6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef" exitCode=0 Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639539 4907 generic.go:334] "Generic (PLEG): container finished" podID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerID="3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320" exitCode=143 Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639695 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerDied","Data":"6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerDied","Data":"3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320"} Jan 27 18:28:56 crc 
kubenswrapper[4907]: I0127 18:28:56.731774 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.748441 4907 scope.go:117] "RemoveContainer" containerID="086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.761852 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.781600 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796617 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796672 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796721 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796817 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod 
\"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796924 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.797022 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.797143 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.797489 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.798250 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs" (OuterVolumeSpecName: "logs") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.800316 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.800345 4907 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.814784 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq" (OuterVolumeSpecName: "kube-api-access-7dfjq") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "kube-api-access-7dfjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.817844 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts" (OuterVolumeSpecName: "scripts") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.839573 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.882271 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data" (OuterVolumeSpecName: "config-data") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902081 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902122 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902134 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902147 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.944857 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.004629 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.658493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerDied","Data":"d0dcb4cda3e336fc0ccb50a2d68dd9abd7cba58c7276a287ff928b11ff880a1e"} Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.658799 4907 scope.go:117] "RemoveContainer" containerID="6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.658535 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.674040 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9"} Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.687212 4907 scope.go:117] "RemoveContainer" containerID="3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.709094 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.733754 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.788052 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" path="/var/lib/kubelet/pods/18fa0523-c08a-427c-b27e-77543fe4bd94/volumes" 
Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.805399 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" path="/var/lib/kubelet/pods/9c425059-b69d-4bf6-ab4b-3c942d87f1a3/volumes" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.812522 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" path="/var/lib/kubelet/pods/f35073d6-fb6a-4896-8275-9f3632f0cd2f/volumes" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.813446 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.814409 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.814908 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.815057 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.815115 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.818987 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819111 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.819193 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" 
containerName="cinder-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819264 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.819409 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819472 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.819608 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819704 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820678 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820793 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820886 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820971 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.821081 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" 
containerName="neutron-httpd" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.821168 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.826027 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.826222 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.829419 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.829531 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.829435 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943598 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943662 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943712 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943780 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-scripts\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943810 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxx45\" (UniqueName: \"kubernetes.io/projected/f62bd754-7667-406a-9883-2015ddcc3f16-kube-api-access-rxx45\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943855 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62bd754-7667-406a-9883-2015ddcc3f16-logs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943883 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62bd754-7667-406a-9883-2015ddcc3f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.944008 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.944036 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046537 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046606 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046631 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046685 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046743 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-scripts\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046769 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxx45\" (UniqueName: \"kubernetes.io/projected/f62bd754-7667-406a-9883-2015ddcc3f16-kube-api-access-rxx45\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046809 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62bd754-7667-406a-9883-2015ddcc3f16-logs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046858 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62bd754-7667-406a-9883-2015ddcc3f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046998 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62bd754-7667-406a-9883-2015ddcc3f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.047297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62bd754-7667-406a-9883-2015ddcc3f16-logs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.050997 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.051209 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.051664 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.053490 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.054088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.062567 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-scripts\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.067153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxx45\" (UniqueName: \"kubernetes.io/projected/f62bd754-7667-406a-9883-2015ddcc3f16-kube-api-access-rxx45\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.153410 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.647858 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.684273 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62bd754-7667-406a-9883-2015ddcc3f16","Type":"ContainerStarted","Data":"612ef568f992f05d9102a33aa03ceb4dd3f384fd3f8ed31e2abf3a65c0144486"}
Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.687616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f"}
Jan 27 18:28:59 crc kubenswrapper[4907]: I0127 18:28:59.706081 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62bd754-7667-406a-9883-2015ddcc3f16","Type":"ContainerStarted","Data":"42ed3b791bfec87220988dd112bce9c37d8fc35aeb5c1ffe2f82f2eda67cbffd"}
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.504157 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.542721 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.622589 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"]
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.622829 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns" containerID="cri-o://a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8" gracePeriod=10
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.738401 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62bd754-7667-406a-9883-2015ddcc3f16","Type":"ContainerStarted","Data":"5a595c94eabb08aabaf4e564cad10dc747a594f72a3a9c5050bb39af7be3d027"}
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.739932 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.755873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6"}
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.756832 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.795383 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.795365334 podStartE2EDuration="3.795365334s" podCreationTimestamp="2026-01-27 18:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:00.791600697 +0000 UTC m=+1395.920883309" watchObservedRunningTime="2026-01-27 18:29:00.795365334 +0000 UTC m=+1395.924647946"
Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.847249 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.980145508 podStartE2EDuration="6.84723014s" podCreationTimestamp="2026-01-27 18:28:54 +0000 UTC" firstStartedPulling="2026-01-27 18:28:55.747648561 +0000 UTC m=+1390.876931173" lastFinishedPulling="2026-01-27 18:28:59.614733193 +0000 UTC m=+1394.744015805" observedRunningTime="2026-01-27 18:29:00.846455348 +0000 UTC m=+1395.975737970" watchObservedRunningTime="2026-01-27 18:29:00.84723014 +0000 UTC m=+1395.976512752"
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.083743 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.186658 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.748947 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs"
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768018 4907 generic.go:334] "Generic (PLEG): container finished" podID="4eb40734-63ad-481e-8830-da770faf9a95" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8" exitCode=0
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768114 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs"
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768191 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerDied","Data":"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"}
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerDied","Data":"9696ceb9132fbe70ed57d119137a4111952838c23ff4de6225536d7aaf063783"}
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768244 4907 scope.go:117] "RemoveContainer" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.769097 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler" containerID="cri-o://f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73" gracePeriod=30
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.769250 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe" containerID="cri-o://270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1" gracePeriod=30
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835202 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") "
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835255 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") "
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835313 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") "
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") "
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835453 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") "
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835489 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") "
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.840273 4907 scope.go:117] "RemoveContainer" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f"
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.847325 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt" (OuterVolumeSpecName: "kube-api-access-4thvt") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "kube-api-access-4thvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.919911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.930505 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.943137 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944020 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944054 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944065 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944073 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.950242 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config" (OuterVolumeSpecName: "config") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.965070 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.047026 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.047066 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.055006 4907 scope.go:117] "RemoveContainer" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"
Jan 27 18:29:02 crc kubenswrapper[4907]: E0127 18:29:02.055769 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8\": container with ID starting with a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8 not found: ID does not exist" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.055825 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"} err="failed to get container status \"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8\": rpc error: code = NotFound desc = could not find container \"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8\": container with ID starting with a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8 not found: ID does not exist"
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.055860 4907 scope.go:117] "RemoveContainer" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f"
Jan 27 18:29:02 crc kubenswrapper[4907]: E0127 18:29:02.056237 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f\": container with ID starting with 9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f not found: ID does not exist" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f"
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.056260 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f"} err="failed to get container status \"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f\": rpc error: code = NotFound desc = could not find container \"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f\": container with ID starting with 9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f not found: ID does not exist"
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.109258 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"]
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.119701 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"]
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.787240 4907 generic.go:334] "Generic (PLEG): container finished" podID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerID="270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1" exitCode=0
Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.787350 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerDied","Data":"270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1"}
Jan 27 18:29:03 crc kubenswrapper[4907]: I0127 18:29:03.789475 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb40734-63ad-481e-8830-da770faf9a95" path="/var/lib/kubelet/pods/4eb40734-63ad-481e-8830-da770faf9a95/volumes"
Jan 27 18:29:04 crc kubenswrapper[4907]: I0127 18:29:04.720454 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bb5448674-jfs9k"
Jan 27 18:29:04 crc kubenswrapper[4907]: I0127 18:29:04.725078 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bb5448674-jfs9k"
Jan 27 18:29:05 crc kubenswrapper[4907]: I0127 18:29:05.210336 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-84847858bd-jp29w"
Jan 27 18:29:05 crc kubenswrapper[4907]: I0127 18:29:05.839064 4907 generic.go:334] "Generic (PLEG): container finished" podID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerID="f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73" exitCode=0
Jan 27 18:29:05 crc kubenswrapper[4907]: I0127 18:29:05.839282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerDied","Data":"f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73"}
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.126344 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") "
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252464 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") "
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252620 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") "
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252751 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") "
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") "
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.253107 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.253525 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") "
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.256322 4907 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.258895 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7" (OuterVolumeSpecName: "kube-api-access-zwkv7") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "kube-api-access-zwkv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.258966 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts" (OuterVolumeSpecName: "scripts") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.263614 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.339492 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358196 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358235 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358249 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358260 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.387250 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data" (OuterVolumeSpecName: "config-data") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.460240 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.850387 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerDied","Data":"6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7"}
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.850454 4907 scope.go:117] "RemoveContainer" containerID="270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.850489 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.882135 4907 scope.go:117] "RemoveContainer" containerID="f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.909939 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.927604 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.948871 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949398 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949414 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns"
Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949443 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949451 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler"
Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949469 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949478 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe"
Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949526 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="init"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949534 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="init"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949794 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949824 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949843 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.951305 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.954634 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.964548 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.072587 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.072977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073132 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073296 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-scripts\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073403 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/621bccf6-c3e9-4b2d-821b-217848191c27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073591 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk5tm\" (UniqueName: \"kubernetes.io/projected/621bccf6-c3e9-4b2d-821b-217848191c27-kube-api-access-kk5tm\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175504 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-scripts\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175591 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/621bccf6-c3e9-4b2d-821b-217848191c27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk5tm\" (UniqueName: \"kubernetes.io/projected/621bccf6-c3e9-4b2d-821b-217848191c27-kube-api-access-kk5tm\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175796 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175834 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175870 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.176069 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/621bccf6-c3e9-4b2d-821b-217848191c27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.180096 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-scripts\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.180114 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.184077 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.185156 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.201063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk5tm\" (UniqueName: \"kubernetes.io/projected/621bccf6-c3e9-4b2d-821b-217848191c27-kube-api-access-kk5tm\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.275485 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.761844 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" path="/var/lib/kubelet/pods/2c24523b-b339-4889-9af6-19c8ec0b1048/volumes"
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.814166 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.868024 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"80e1b77affd59551c53daf19ceda8c74435235ef679c23448735290450f5e301"}
Jan 27 18:29:08 crc kubenswrapper[4907]: I0127 18:29:08.885324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada"}
Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.136850 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.138362 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.147217 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.147436 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.147601 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kzl48" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.184505 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.217871 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.218295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.218465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.218503 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320177 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320259 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320345 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320439 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.321997 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.326518 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.328568 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.340213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.389711 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.390775 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.422252 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.455328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.457146 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.479249 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.525878 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.525930 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9xs\" (UniqueName: \"kubernetes.io/projected/8cea1342-da85-42e5-a54b-98b132f7871f-kube-api-access-8x9xs\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.525988 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.526040 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: E0127 18:29:09.588127 4907 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 18:29:09 crc kubenswrapper[4907]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6eadebaf-c7ae-4b1a-9917-b00dd0e25125_0(0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c" Netns:"/var/run/netns/f3d4b7f9-0fb5-4f55-b9e3-bed143bb9417" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c;K8S_POD_UID=6eadebaf-c7ae-4b1a-9917-b00dd0e25125" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6eadebaf-c7ae-4b1a-9917-b00dd0e25125]: expected pod UID "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" but got "8cea1342-da85-42e5-a54b-98b132f7871f" from Kube API Jan 27 18:29:09 crc kubenswrapper[4907]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 18:29:09 crc kubenswrapper[4907]: > Jan 27 18:29:09 crc 
kubenswrapper[4907]: E0127 18:29:09.588194 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 18:29:09 crc kubenswrapper[4907]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6eadebaf-c7ae-4b1a-9917-b00dd0e25125_0(0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c" Netns:"/var/run/netns/f3d4b7f9-0fb5-4f55-b9e3-bed143bb9417" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c;K8S_POD_UID=6eadebaf-c7ae-4b1a-9917-b00dd0e25125" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6eadebaf-c7ae-4b1a-9917-b00dd0e25125]: expected pod UID "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" but got "8cea1342-da85-42e5-a54b-98b132f7871f" from Kube API Jan 27 18:29:09 crc kubenswrapper[4907]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 18:29:09 crc kubenswrapper[4907]: > pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627314 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627440 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627474 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9xs\" (UniqueName: \"kubernetes.io/projected/8cea1342-da85-42e5-a54b-98b132f7871f-kube-api-access-8x9xs\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.628204 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.631863 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.631867 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.646279 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9xs\" (UniqueName: \"kubernetes.io/projected/8cea1342-da85-42e5-a54b-98b132f7871f-kube-api-access-8x9xs\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.840787 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.899158 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.903447 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"044115f65d6920ca496f342a6b343de4121660bd3773415df6035ea6c5f9cfd3"} Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.959939 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.959914575 podStartE2EDuration="3.959914575s" podCreationTimestamp="2026-01-27 18:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:09.947828829 +0000 UTC m=+1405.077111441" watchObservedRunningTime="2026-01-27 18:29:09.959914575 +0000 UTC m=+1405.089197187" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.999526 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.005813 4907 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6eadebaf-c7ae-4b1a-9917-b00dd0e25125" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137337 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137714 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137768 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.138207 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.138835 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.143350 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.143990 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh" (OuterVolumeSpecName: "kube-api-access-7rpnh") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "kube-api-access-7rpnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.155644 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.245379 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.245417 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.245447 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.384534 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.827032 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.931704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8cea1342-da85-42e5-a54b-98b132f7871f","Type":"ContainerStarted","Data":"3867807b82d8a9c03c0d7533083bbdbba4c5bfc337fae8b64ab63248bc7ef586"} Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.931855 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.966642 4907 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6eadebaf-c7ae-4b1a-9917-b00dd0e25125" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:11 crc kubenswrapper[4907]: I0127 18:29:11.761606 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eadebaf-c7ae-4b1a-9917-b00dd0e25125" path="/var/lib/kubelet/pods/6eadebaf-c7ae-4b1a-9917-b00dd0e25125/volumes" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.253067 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.254858 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.257496 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.257765 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.260411 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-865rb" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.270084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.276463 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.343237 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.345334 4907 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.386972 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406805 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406890 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406937 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406971 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406990 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407035 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407140 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 
18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.408144 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.422774 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"]
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.428174 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.438638 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.441656 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"]
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.443789 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.475159 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.510973 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511194 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511304 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511379 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511471 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512271 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512369 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512451 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512577 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512841 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512914 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.513096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.513176 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.513311 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.514949 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.515632 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.520817 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.521223 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.521622 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.528251 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.528417 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.536080 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"]
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.541632 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"]
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.545028 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.557476 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.582433 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.588401 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618928 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618964 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619019 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619043 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619078 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.626391 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.626877 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.633296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.634725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.635600 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.642926 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.644562 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.671465 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.677148 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.742901 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.807348 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.478457 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"]
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.494494 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"]
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.652924 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"]
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.717328 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"]
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.979052 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerStarted","Data":"fb2fc41aa6c79868126426826ea77ab0aae08150f293d25b0312a5646e2300eb"}
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.980566 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerStarted","Data":"78bb94542d4d4bf3a6f813c5ed68ef097d08a2c24ab72fcdea247f0ea0fd3815"}
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.981981 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerStarted","Data":"da74bdc5a99774c3a69253cc4dea9e02dc1b8da7d8cfa07a92eca32b5b2b99df"}
Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.986691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerStarted","Data":"52f3d532c41726df5137a369b3af84663e53079bc95d251a6728a34657806803"}
Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.025250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerStarted","Data":"4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df"}
Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.025797 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-68c4f5ddbb-hppxn"
Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.042231 4907 generic.go:334] "Generic (PLEG): container finished" podID="719784a4-cead-4054-ac6b-e7e45118be8c" containerID="838120c8a589e1eec6c4e9a5c93d0700a4e4ff1ee248a15cba9fab8e23320155" exitCode=0
Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.042271 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerDied","Data":"838120c8a589e1eec6c4e9a5c93d0700a4e4ff1ee248a15cba9fab8e23320155"}
Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.058304 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-68c4f5ddbb-hppxn" podStartSLOduration=3.058281381 podStartE2EDuration="3.058281381s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:15.04884 +0000 UTC m=+1410.178122612" watchObservedRunningTime="2026-01-27 18:29:15.058281381 +0000 UTC m=+1410.187563993"
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.067873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerStarted","Data":"317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505"}
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.068383 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-868bfd5587-xkz6n"
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.070090 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerStarted","Data":"db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233"}
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.070161 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65c6f76446-q72qf"
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.072683 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerStarted","Data":"f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b"}
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.073506 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.126152 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-868bfd5587-xkz6n" podStartSLOduration=2.221826619 podStartE2EDuration="5.126132079s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="2026-01-27 18:29:13.647166756 +0000 UTC m=+1408.776449368" lastFinishedPulling="2026-01-27 18:29:16.551472216 +0000 UTC m=+1411.680754828" observedRunningTime="2026-01-27 18:29:17.091993311 +0000 UTC m=+1412.221275923" watchObservedRunningTime="2026-01-27 18:29:17.126132079 +0000 UTC m=+1412.255414691"
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.127492 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" podStartSLOduration=5.127485758 podStartE2EDuration="5.127485758s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:17.1177832 +0000 UTC m=+1412.247065812" watchObservedRunningTime="2026-01-27 18:29:17.127485758 +0000 UTC m=+1412.256768370"
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.162202 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podStartSLOduration=2.077912046 podStartE2EDuration="5.162178242s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="2026-01-27 18:29:13.472334967 +0000 UTC m=+1408.601617579" lastFinishedPulling="2026-01-27 18:29:16.556601163 +0000 UTC m=+1411.685883775" observedRunningTime="2026-01-27 18:29:17.132688977 +0000 UTC m=+1412.261971599" watchObservedRunningTime="2026-01-27 18:29:17.162178242 +0000 UTC m=+1412.291460854"
Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.562664 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.206584 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"]
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.209843 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.220312 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"]
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.273703 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.273769 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.274032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.375799 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.375958 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.376067 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.376377 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.376426 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.395692 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.534414 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp"
Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.933266 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d47577fc9-fz5kg"]
Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.936138 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.940955 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.941430 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.941630 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.958032 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d47577fc9-fz5kg"]
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064094 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-config-data\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064189 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-log-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064275 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-public-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064297 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjvl\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-kube-api-access-fxjvl\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064362 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-etc-swift\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064378 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-combined-ca-bundle\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.068154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-internal-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.068515 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-run-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171175 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-run-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171236 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-config-data\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171281 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-log-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-public-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171652 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjvl\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-kube-api-access-fxjvl\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171690 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-etc-swift\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171708 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-combined-ca-bundle\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171747 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-internal-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.173212 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-log-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.173725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-run-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.194625 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjvl\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-kube-api-access-fxjvl\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.194720 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-internal-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.195188 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-config-data\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg"
Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.196372 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-etc-swift\") pod 
\"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.196728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-public-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.208345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-combined-ca-bundle\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.263313 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.284635 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.286510 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.313529 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.315017 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.365088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377698 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377894 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378435 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378473 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378544 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378586 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.415835 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.432013 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.433927 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.481152 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.487676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.487717 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.487966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488001 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"heat-engine-575dc845-lv7nr\" (UID: 
\"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488408 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488469 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488699 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488750 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " 
pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488829 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488852 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.495277 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.498334 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.498613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.498764 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.499704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.510617 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.534454 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.540236 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.543370 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r6bv\" (UniqueName: 
\"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591295 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591357 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591372 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.596496 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.599118 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.602254 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.616094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.658108 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.672671 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.779028 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.197440 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.286355 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.286681 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59cf67488d-dzx5l" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" containerID="cri-o://2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1" gracePeriod=30 Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.287277 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59cf67488d-dzx5l" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" containerID="cri-o://7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3" gracePeriod=30 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.173373 4907 generic.go:334] "Generic (PLEG): container finished" podID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerID="7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3" exitCode=0 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.173472 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerDied","Data":"7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3"} Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.364014 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.364313 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" containerID="cri-o://57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4" gracePeriod=30 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.364430 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" containerID="cri-o://407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be" gracePeriod=30 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.690232 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.761168 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.761465 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" containerID="cri-o://09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d" gracePeriod=10 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.196960 4907 generic.go:334] "Generic (PLEG): container finished" podID="26ebee0c-64db-4384-9e27-95691ee28a17" containerID="57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4" exitCode=143 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.197022 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerDied","Data":"57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4"} Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.200461 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerID="09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d" exitCode=0 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.200505 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerDied","Data":"09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d"} Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.484023 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.484285 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" containerID="cri-o://317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505" gracePeriod=60 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.501723 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.501978 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" containerID="cri-o://db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233" gracePeriod=60 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.514463 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.514691 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" 
podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.528764 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.532371 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.532526 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.539500 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.550064 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.554890 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.555202 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.559576 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 
18:29:23.561127 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.573041 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.573048 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.579836 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.579951 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580019 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod 
\"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580118 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580162 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.587673 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.601601 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.603705 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": read tcp 10.217.0.2:56772->10.217.0.216:8004: read: connection reset by peer" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682007 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " 
pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682127 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " 
pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682265 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682320 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682349 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682367 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " 
pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.690852 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.695698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.697112 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.700629 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: 
I0127 18:29:23.700840 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.702230 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784274 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784619 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784745 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784779 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.793408 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.794354 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.794609 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.795239 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.795542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.810387 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.857738 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.882749 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.961117 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.962785 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" containerID="cri-o://30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.962830 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" containerID="cri-o://b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.962785 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" containerID="cri-o://be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.969138 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" containerID="cri-o://4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.988356 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": EOF" Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.173963 4907 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: connection refused" Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.254198 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6" exitCode=0 Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.254472 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f" exitCode=2 Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.255694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6"} Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.256071 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f"} Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.542014 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.269204 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516" exitCode=0 Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 
18:29:26.269283 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516"} Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.272590 4907 generic.go:334] "Generic (PLEG): container finished" podID="26ebee0c-64db-4384-9e27-95691ee28a17" containerID="407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be" exitCode=0 Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.272721 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerDied","Data":"407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be"} Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.677675 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.196:9292/healthcheck\": dial tcp 10.217.0.196:9292: connect: connection refused" Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.677852 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.196:9292/healthcheck\": dial tcp 10.217.0.196:9292: connect: connection refused" Jan 27 18:29:27 crc kubenswrapper[4907]: E0127 18:29:27.888589 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 27 18:29:27 crc kubenswrapper[4907]: E0127 18:29:27.889226 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdhb5hd4hf9h66fh5cdhd7h566h684hb6h668h64ch5b8hd5h674h5f6h657h576h5bh57fh699h7dh85h586h57fh9ch67fhcdh686h56h5f9h59bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8x9xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(8cea1342-da85-42e5-a54b-98b132f7871f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:29:27 crc kubenswrapper[4907]: E0127 18:29:27.892568 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.378296 4907 generic.go:334] "Generic (PLEG): container finished" podID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerID="2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1" exitCode=0 Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.379511 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerDied","Data":"2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1"} Jan 27 18:29:28 crc kubenswrapper[4907]: E0127 18:29:28.397917 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.743898 4907 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924680 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924747 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924974 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.925114 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.931065 4907 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b" (OuterVolumeSpecName: "kube-api-access-clx4b") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "kube-api-access-clx4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.931792 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.986944 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": read tcp 10.217.0.2:56774->10.217.0.216:8004: read: connection reset by peer" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.008322 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.028359 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.028409 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.028427 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.036644 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.065581 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config" (OuterVolumeSpecName: "config") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.129735 4907 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.129776 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.228022 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.235958 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336183 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336370 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336435 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8p2s\" (UniqueName: \"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: 
\"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336473 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337188 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337852 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337889 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337920 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337955 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337988 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.338940 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.342067 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs" (OuterVolumeSpecName: "logs") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.349543 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts" (OuterVolumeSpecName: "scripts") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.349625 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s" (OuterVolumeSpecName: "kube-api-access-d8p2s") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "kube-api-access-d8p2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.391293 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.393598 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerDied","Data":"f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.393652 4907 scope.go:117] "RemoveContainer" containerID="407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.393848 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.400106 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerDied","Data":"6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.400482 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.421386 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (OuterVolumeSpecName: "glance") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.421720 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerDied","Data":"9302d404bfba0eee2d8da4cca550efe55dc895f1186f50f082b7659191ac1d96"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.421805 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.430763 4907 generic.go:334] "Generic (PLEG): container finished" podID="e984f28b-ac80-459a-9dd3-8faa56796324" containerID="317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505" exitCode=0 Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.430807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerDied","Data":"317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.440977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.445335 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: W0127 18:29:29.445426 4907 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/26ebee0c-64db-4384-9e27-95691ee28a17/volumes/kubernetes.io~secret/internal-tls-certs Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.445461 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.446082 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.446884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.446931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.447107 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.447173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452016 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8p2s\" (UniqueName: 
\"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452056 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452069 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452103 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452120 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452132 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.475747 4907 scope.go:117] "RemoveContainer" containerID="57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.477132 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm" (OuterVolumeSpecName: "kube-api-access-dtqwm") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: 
"07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "kube-api-access-dtqwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.491509 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data" (OuterVolumeSpecName: "config-data") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.514308 4907 scope.go:117] "RemoveContainer" containerID="7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.530035 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.537073 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.535587 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.544797 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.544980 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8") on node "crc" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.554018 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.554173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.554233 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.555884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557262 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557294 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557308 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557323 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.561712 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.563216 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg" (OuterVolumeSpecName: "kube-api-access-fv4gg") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "kube-api-access-fv4gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.577668 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.584307 4907 scope.go:117] "RemoveContainer" containerID="2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.592130 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.628833 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config" (OuterVolumeSpecName: "config") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.631411 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.653356 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.657273 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663691 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663729 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663743 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663757 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663768 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663778 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663793 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663911 4907 scope.go:117] "RemoveContainer" containerID="09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.681279 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.698886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.703692 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data" (OuterVolumeSpecName: "config-data") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.715089 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.718779 4907 scope.go:117] "RemoveContainer" containerID="a183293e03f2475814a7a549a40c6fac89c967734284ac6b7832b0a0bcbbcc1b" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.730508 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.766496 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.805193 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" path="/var/lib/kubelet/pods/90a7953e-f884-40eb-a25f-356aefbc6b83/volumes" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.807490 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.807602 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.807662 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d47577fc9-fz5kg"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.896903 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.940826 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.963244 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": read tcp 10.217.0.2:40084->10.217.0.217:8000: read: connection reset by peer" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.969634 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970181 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970201 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970212 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970219 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970242 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970249 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970260 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="init" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970266 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" 
containerName="init" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970281 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970287 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970300 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970305 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970330 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970336 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970523 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970537 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970544 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970577 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" Jan 27 18:29:29 crc 
kubenswrapper[4907]: I0127 18:29:29.970590 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970612 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.971808 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.977377 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.979572 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.985978 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.016840 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.057424 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.116977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117057 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117122 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117148 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117165 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117531 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmnw\" (UniqueName: \"kubernetes.io/projected/79b7035b-7e7c-40e4-86a8-d1499df47d5f-kube-api-access-5pmnw\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117753 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117792 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221238 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221379 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221509 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmnw\" (UniqueName: \"kubernetes.io/projected/79b7035b-7e7c-40e4-86a8-d1499df47d5f-kube-api-access-5pmnw\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221624 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221661 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.222492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.223074 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.230213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.230887 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.230915 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac46f52a85ef09145563fd0548ce08354897473e6fde7cb6037ea95dd6b9939/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.228754 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.237465 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.237504 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.244399 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmnw\" (UniqueName: 
\"kubernetes.io/projected/79b7035b-7e7c-40e4-86a8-d1499df47d5f-kube-api-access-5pmnw\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.298602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.474959 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerStarted","Data":"7fc94a6ead06a9cce9ab07e0158546ba301eccc8d2ee6e6136d473c9cfe6314a"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.508511 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerStarted","Data":"331b532045c147969d3177834aceba27cb761565c843a60c7c50b5dded09e0dd"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.514646 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerStarted","Data":"932bc826c2156d8a545c997d012284767107290f98ea3a005ea9a94b6a995a9a"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.522123 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerStarted","Data":"982750ecf1d92da3b9717ddf32bec4e3216a8b464d7df0c13f157bfd3020e7bb"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.528523 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-proxy-6d47577fc9-fz5kg" event={"ID":"bfb5201d-eb44-42cb-a5ab-49520cc1e741","Type":"ContainerStarted","Data":"1cda97c7c8fb2238d89fddf9156984f90f05b47233ef53653849986540f6e310"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.544747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerStarted","Data":"02a7ef787ad2af55aee76003ed5f2c734d79246a8b02f7fc6a11cdc00fcff410"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.550390 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.551350 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerStarted","Data":"04b2490b34471bde3da133012b6b62ccc9d41cf3e6a16b1fd242cf158ae8c1e2"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.563996 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerDied","Data":"78bb94542d4d4bf3a6f813c5ed68ef097d08a2c24ab72fcdea247f0ea0fd3815"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.564049 4907 scope.go:117] "RemoveContainer" containerID="317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.564088 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.589166 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9" exitCode=0 Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.589255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.630990 4907 generic.go:334] "Generic (PLEG): container finished" podID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerID="db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233" exitCode=0 Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.631211 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerDied","Data":"db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.656286 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.662587 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.672829 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.736734 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.736905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.737000 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.737414 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.752628 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.755267 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9" (OuterVolumeSpecName: "kube-api-access-g4rp9") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "kube-api-access-g4rp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.760075 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839079 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839165 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839225 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: 
\"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839254 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839318 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839513 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.840930 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.840953 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.845397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.847183 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.852767 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92" (OuterVolumeSpecName: "kube-api-access-kjd92") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "kube-api-access-kjd92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.957140 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.957184 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.957192 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.021413 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts" (OuterVolumeSpecName: "scripts") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.061323 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.315463 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.606691 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.624272 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.632768 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data" (OuterVolumeSpecName: "config-data") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.682363 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"0dcda993740684fc7c475d953a4c78e14e7f5c8fbe1eb87431b09fbc9bf63899"} Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.682417 4907 scope.go:117] "RemoveContainer" containerID="b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.682569 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.696961 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerDied","Data":"da74bdc5a99774c3a69253cc4dea9e02dc1b8da7d8cfa07a92eca32b5b2b99df"} Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.697054 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.700903 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.702366 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.702402 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.727525 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.739370 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerStarted","Data":"7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e"} Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.741582 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.773852 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-667f9867c-2tvqc" podStartSLOduration=8.77383231 podStartE2EDuration="8.77383231s" podCreationTimestamp="2026-01-27 18:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:31.763672911 +0000 UTC m=+1426.892955533" watchObservedRunningTime="2026-01-27 18:29:31.77383231 +0000 UTC m=+1426.903114922" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.775869 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" path="/var/lib/kubelet/pods/07c0995e-8815-4b0f-bea0-e278aca1a898/volumes" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.777377 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" path="/var/lib/kubelet/pods/26ebee0c-64db-4384-9e27-95691ee28a17/volumes" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.778775 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" path="/var/lib/kubelet/pods/e984f28b-ac80-459a-9dd3-8faa56796324/volumes" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.804283 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.818639 4907 generic.go:334] "Generic (PLEG): container finished" podID="32a7503d-bec1-4b22-a132-abaa924af073" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c" exitCode=0 Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.824679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data" (OuterVolumeSpecName: "config-data") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.847341 4907 scope.go:117] "RemoveContainer" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.854048 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" podStartSLOduration=8.854026043 podStartE2EDuration="8.854026043s" podCreationTimestamp="2026-01-27 18:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:31.806970313 +0000 UTC m=+1426.936252935" watchObservedRunningTime="2026-01-27 18:29:31.854026043 +0000 UTC m=+1426.983308655" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.906352 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.915870 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6cddcdb4d8-6v6xb" 
podStartSLOduration=11.915851553 podStartE2EDuration="11.915851553s" podCreationTimestamp="2026-01-27 18:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:31.887921778 +0000 UTC m=+1427.017204390" watchObservedRunningTime="2026-01-27 18:29:31.915851553 +0000 UTC m=+1427.045134165" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052667 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"79b7035b-7e7c-40e4-86a8-d1499df47d5f","Type":"ContainerStarted","Data":"625904e18e2df3ac8a2e8eb7cebe332c9d8345bb145b5aa0ae9e47e979fac9f0"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerStarted","Data":"2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052740 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052760 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerStarted","Data":"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052775 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" 
event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerStarted","Data":"9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052791 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d47577fc9-fz5kg" event={"ID":"bfb5201d-eb44-42cb-a5ab-49520cc1e741","Type":"ContainerStarted","Data":"db7244065a8a454343422aa0144fde80cab3ae31707a00db86ef0860f2dcd4df"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.073750 4907 scope.go:117] "RemoveContainer" containerID="be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.225174 4907 scope.go:117] "RemoveContainer" containerID="30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.232152 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.251576 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.272181 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.303673 4907 scope.go:117] "RemoveContainer" containerID="4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.334150 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.357813 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 
18:29:32.358782 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358800 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358820 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358826 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358861 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358866 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358883 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358888 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358901 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358906 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 
18:29:32.359142 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359159 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359171 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359182 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359190 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.363667 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.374627 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.374879 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.394934 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.407885 4907 scope.go:117] "RemoveContainer" containerID="db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433043 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433117 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433214 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433350 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535737 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " 
pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535927 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535963 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535987 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.536120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.537653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.537978 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.548103 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.549287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.557459 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.557473 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.559601 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.654613 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.692578 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.922891 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerStarted","Data":"f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.923914 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.949290 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"79b7035b-7e7c-40e4-86a8-d1499df47d5f","Type":"ContainerStarted","Data":"1b2e5927a1dda6d6eac67be0afa08a3ea1ef91530f32d480cd6b5f5930c5f5d1"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.952100 4907 generic.go:334] "Generic (PLEG): container finished" podID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerID="9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294" exitCode=1 Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.952306 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerDied","Data":"9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294"} Jan 27 18:29:32 crc 
kubenswrapper[4907]: I0127 18:29:32.953220 4907 scope.go:117] "RemoveContainer" containerID="9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.963413 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d47577fc9-fz5kg" event={"ID":"bfb5201d-eb44-42cb-a5ab-49520cc1e741","Type":"ContainerStarted","Data":"3f7d89cf947f8a39ad3e952a39bb5ca4a60f28a71f4f01eac307c7a42fe3c341"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.963658 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.964082 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.985279 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-575dc845-lv7nr" podStartSLOduration=12.98525956 podStartE2EDuration="12.98525956s" podCreationTimestamp="2026-01-27 18:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:32.941818383 +0000 UTC m=+1428.071100995" watchObservedRunningTime="2026-01-27 18:29:32.98525956 +0000 UTC m=+1428.114542172" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989474 4907 generic.go:334] "Generic (PLEG): container finished" podID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" exitCode=1 Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989677 4907 generic.go:334] "Generic (PLEG): container finished" podID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" exitCode=1 Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989757 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerDied","Data":"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989801 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerDied","Data":"41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989935 4907 scope.go:117] "RemoveContainer" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.990463 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.990798 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-684dfbddb9-n6ljt_openstack(356365d4-834b-4980-96b4-9640bc0e2ed1)\"" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.025624 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d47577fc9-fz5kg" podStartSLOduration=14.025602547 podStartE2EDuration="14.025602547s" podCreationTimestamp="2026-01-27 18:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:33.017172567 +0000 UTC m=+1428.146455179" watchObservedRunningTime="2026-01-27 18:29:33.025602547 +0000 UTC m=+1428.154885159" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.150873 4907 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.151494 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" containerID="cri-o://1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431" gracePeriod=30 Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.151801 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" containerID="cri-o://584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536" gracePeriod=30 Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.159698 4907 scope.go:117] "RemoveContainer" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:33 crc kubenswrapper[4907]: E0127 18:29:33.163465 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7\": container with ID starting with 45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7 not found: ID does not exist" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.163525 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7"} err="failed to get container status \"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7\": rpc error: code = NotFound desc = could not find container \"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7\": container with ID starting with 45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7 not found: 
ID does not exist" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.287486 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:33 crc kubenswrapper[4907]: W0127 18:29:33.289644 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509 WatchSource:0}: Error finding container ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509: Status 404 returned error can't find the container with id ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509 Jan 27 18:29:33 crc kubenswrapper[4907]: E0127 18:29:33.464222 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc45c3a_8ebc_47ae_b823_b3013e4ea0df.slice/crio-conmon-1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc45c3a_8ebc_47ae_b823_b3013e4ea0df.slice/crio-1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.767906 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" path="/var/lib/kubelet/pods/4c65559f-94dd-4b82-af1f-5d4c22c758c2/volumes" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.768620 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" path="/var/lib/kubelet/pods/fd086e93-3ba0-4f66-a848-e139b0eaaef1/volumes" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.011616 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" exitCode=1 Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.011927 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerDied","Data":"6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38"} Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.011994 4907 scope.go:117] "RemoveContainer" containerID="9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.015627 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:34 crc kubenswrapper[4907]: E0127 18:29:34.016384 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6cddcdb4d8-6v6xb_openstack(d8e647e4-32a6-4b4f-a082-3d1ff013a6d8)\"" pod="openstack/heat-api-6cddcdb4d8-6v6xb" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.028049 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerStarted","Data":"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b"} Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.036201 4907 generic.go:334] "Generic (PLEG): container finished" podID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerID="1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431" exitCode=143 Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.036279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerDied","Data":"1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431"} Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.040746 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:34 crc kubenswrapper[4907]: E0127 18:29:34.044376 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-684dfbddb9-n6ljt_openstack(356365d4-834b-4980-96b4-9640bc0e2ed1)\"" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.051717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509"} Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.063674 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2"} Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.067599 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"79b7035b-7e7c-40e4-86a8-d1499df47d5f","Type":"ContainerStarted","Data":"84e4d822154dac7b667eb71c0da0cc01f2e6ff2ccaf638c661063b7f1a73168b"} Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.070283 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:35 crc kubenswrapper[4907]: E0127 18:29:35.070569 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6cddcdb4d8-6v6xb_openstack(d8e647e4-32a6-4b4f-a082-3d1ff013a6d8)\"" pod="openstack/heat-api-6cddcdb4d8-6v6xb" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.104101 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.104085163 podStartE2EDuration="6.104085163s" podCreationTimestamp="2026-01-27 18:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:35.091986329 +0000 UTC m=+1430.221268951" watchObservedRunningTime="2026-01-27 18:29:35.104085163 +0000 UTC m=+1430.233367775" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.673431 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.673488 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.674320 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:35 crc kubenswrapper[4907]: E0127 18:29:35.674685 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-684dfbddb9-n6ljt_openstack(356365d4-834b-4980-96b4-9640bc0e2ed1)\"" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.779426 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:35 
crc kubenswrapper[4907]: I0127 18:29:35.779485 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:36 crc kubenswrapper[4907]: I0127 18:29:36.085180 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c"} Jan 27 18:29:36 crc kubenswrapper[4907]: I0127 18:29:36.086139 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:36 crc kubenswrapper[4907]: E0127 18:29:36.086423 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6cddcdb4d8-6v6xb_openstack(d8e647e4-32a6-4b4f-a082-3d1ff013a6d8)\"" pod="openstack/heat-api-6cddcdb4d8-6v6xb" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.109542 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097"} Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.116169 4907 generic.go:334] "Generic (PLEG): container finished" podID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerID="584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536" exitCode=0 Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.116214 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerDied","Data":"584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536"} Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.546816 4907 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.607184 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.607540 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.607570 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608271 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608447 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608671 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608852 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608971 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.609096 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.610468 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.610973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs" (OuterVolumeSpecName: "logs") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.620786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts" (OuterVolumeSpecName: "scripts") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.624122 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s" (OuterVolumeSpecName: "kube-api-access-jdj7s") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "kube-api-access-jdj7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.639440 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (OuterVolumeSpecName: "glance") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.673096 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713318 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713363 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713376 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713387 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713398 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.719947 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.733421 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data" (OuterVolumeSpecName: "config-data") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.749489 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.749715 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1") on node "crc" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.815752 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.815782 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.815791 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.942885 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.138381 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerDied","Data":"a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde"} Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.138456 4907 scope.go:117] "RemoveContainer" containerID="584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.138452 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.175263 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.179930 4907 scope.go:117] "RemoveContainer" containerID="1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.194141 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.213759 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: E0127 18:29:38.214406 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214421 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" Jan 27 18:29:38 crc kubenswrapper[4907]: E0127 18:29:38.214468 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214480 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214767 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214789 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.216328 4907 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.221509 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.224760 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.225167 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329800 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329864 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329939 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329963 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-logs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330058 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-config-data\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330085 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwsd\" (UniqueName: \"kubernetes.io/projected/34586e59-e405-4871-9eb7-6ec0251bc992-kube-api-access-7lwsd\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-scripts\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.431616 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432201 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432775 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432813 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432888 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-logs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-config-data\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432982 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lwsd\" (UniqueName: \"kubernetes.io/projected/34586e59-e405-4871-9eb7-6ec0251bc992-kube-api-access-7lwsd\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.433034 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-scripts\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.433625 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-logs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.438136 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.438641 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-config-data\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.438912 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.439985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-scripts\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.453342 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwsd\" (UniqueName: \"kubernetes.io/projected/34586e59-e405-4871-9eb7-6ec0251bc992-kube-api-access-7lwsd\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.457861 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.458205 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f246d510422cbc2bc7c65e4cfc4b09adee7977bbf094457002b4446ed6bccfbd/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.704123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.848447 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:39 crc kubenswrapper[4907]: I0127 18:29:39.485617 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:39 crc kubenswrapper[4907]: I0127 18:29:39.776069 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" path="/var/lib/kubelet/pods/edc45c3a-8ebc-47ae-b823-b3013e4ea0df/volumes" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.169327 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257"} Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170077 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" containerID="cri-o://c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" gracePeriod=30 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170323 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170697 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" containerID="cri-o://ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" gracePeriod=30 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170748 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" containerID="cri-o://742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" gracePeriod=30 Jan 27 18:29:40 
crc kubenswrapper[4907]: I0127 18:29:40.170782 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" containerID="cri-o://e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" gracePeriod=30 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.189485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34586e59-e405-4871-9eb7-6ec0251bc992","Type":"ContainerStarted","Data":"d7b0b81f86060110a879285f5efa521d7667a77599d2b61007026678743cd8c9"} Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.200779 4907 generic.go:334] "Generic (PLEG): container finished" podID="32a7503d-bec1-4b22-a132-abaa924af073" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" exitCode=0 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.200822 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b"} Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.200927 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.941267337 podStartE2EDuration="8.200909381s" podCreationTimestamp="2026-01-27 18:29:32 +0000 UTC" firstStartedPulling="2026-01-27 18:29:33.299969599 +0000 UTC m=+1428.429252211" lastFinishedPulling="2026-01-27 18:29:38.559611643 +0000 UTC m=+1433.688894255" observedRunningTime="2026-01-27 18:29:40.199076679 +0000 UTC m=+1435.328359311" watchObservedRunningTime="2026-01-27 18:29:40.200909381 +0000 UTC m=+1435.330191993" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.292418 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.293097 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.551977 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.552340 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.609134 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.610378 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.618443 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.711474 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.969329 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.025501 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.235692 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34586e59-e405-4871-9eb7-6ec0251bc992","Type":"ContainerStarted","Data":"1727c8156395d08f2124e173e1673ee384795cc87b27ab131267392b0b2e82b0"} Jan 27 18:29:41 crc 
kubenswrapper[4907]: I0127 18:29:41.247709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerStarted","Data":"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db"} Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.254736 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" exitCode=0 Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.254766 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" exitCode=2 Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.254774 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" exitCode=0 Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257"} Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256210 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097"} Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256221 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c"} Jan 27 18:29:41 crc kubenswrapper[4907]: 
I0127 18:29:41.256234 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256746 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.280287 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7bnp" podStartSLOduration=14.31229163 podStartE2EDuration="23.280263401s" podCreationTimestamp="2026-01-27 18:29:18 +0000 UTC" firstStartedPulling="2026-01-27 18:29:31.821484786 +0000 UTC m=+1426.950767398" lastFinishedPulling="2026-01-27 18:29:40.789456557 +0000 UTC m=+1435.918739169" observedRunningTime="2026-01-27 18:29:41.271961094 +0000 UTC m=+1436.401243706" watchObservedRunningTime="2026-01-27 18:29:41.280263401 +0000 UTC m=+1436.409546013" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.448610 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.530783 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.530911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.531021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.531053 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.538087 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.538271 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx" (OuterVolumeSpecName: "kube-api-access-f5phx") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "kube-api-access-f5phx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.608021 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.611832 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data" (OuterVolumeSpecName: "config-data") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637934 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637970 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637980 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637988 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.720134 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.842824 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.842997 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.843052 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.843083 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.846955 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.847543 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf" (OuterVolumeSpecName: "kube-api-access-z2bzf") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "kube-api-access-z2bzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.878206 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.915289 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data" (OuterVolumeSpecName: "config-data") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946744 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946778 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946789 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946797 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.265340 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerDied","Data":"331b532045c147969d3177834aceba27cb761565c843a60c7c50b5dded09e0dd"} Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.265355 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.265779 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.267630 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34586e59-e405-4871-9eb7-6ec0251bc992","Type":"ContainerStarted","Data":"c691ffc96d61c669523c5ce2a5a0e9fa09a4b7efc048d6f710d2fcb93c5c6cc5"} Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.273689 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.277714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerDied","Data":"7fc94a6ead06a9cce9ab07e0158546ba301eccc8d2ee6e6136d473c9cfe6314a"} Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.308309 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.340602 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.340581429 podStartE2EDuration="4.340581429s" podCreationTimestamp="2026-01-27 18:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:42.305026487 +0000 UTC m=+1437.434309099" watchObservedRunningTime="2026-01-27 18:29:42.340581429 +0000 UTC m=+1437.469864041" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.345866 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:42 crc 
kubenswrapper[4907]: I0127 18:29:42.358161 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.369014 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.379446 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.291289 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.291673 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.771906 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" path="/var/lib/kubelet/pods/356365d4-834b-4980-96b4-9640bc0e2ed1/volumes" Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.773226 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" path="/var/lib/kubelet/pods/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8/volumes" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.316336 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8cea1342-da85-42e5-a54b-98b132f7871f","Type":"ContainerStarted","Data":"1c52a84f09ca2f0b331b4f48f557ce63cd9376f4f95ea9c34c966e8d60264536"} Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.339109 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.564807023 podStartE2EDuration="35.339089797s" podCreationTimestamp="2026-01-27 18:29:09 +0000 UTC" firstStartedPulling="2026-01-27 18:29:10.388936326 +0000 UTC m=+1405.518218938" lastFinishedPulling="2026-01-27 18:29:43.1632191 +0000 UTC 
m=+1438.292501712" observedRunningTime="2026-01-27 18:29:44.330821391 +0000 UTC m=+1439.460104003" watchObservedRunningTime="2026-01-27 18:29:44.339089797 +0000 UTC m=+1439.468372409" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.510848 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.510946 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.520356 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: E0127 18:29:48.506606 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-conmon-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:29:48 crc kubenswrapper[4907]: E0127 18:29:48.506792 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-conmon-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data 
in memory cache]" Jan 27 18:29:48 crc kubenswrapper[4907]: E0127 18:29:48.506922 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-conmon-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.535098 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.536430 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.849412 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.849732 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.899690 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.903230 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.938234 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.943619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.943988 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944212 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944376 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944509 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 
18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944621 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944724 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.953248 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.953423 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.960837 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9" (OuterVolumeSpecName: "kube-api-access-dj6m9") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "kube-api-access-dj6m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.970038 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts" (OuterVolumeSpecName: "scripts") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.022751 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049163 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049200 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049212 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049222 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc 
kubenswrapper[4907]: I0127 18:29:49.049230 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.093693 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.151162 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.181897 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data" (OuterVolumeSpecName: "config-data") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.252735 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.367906 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" exitCode=0 Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.369626 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2"} Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372061 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509"} Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372086 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372111 4907 scope.go:117] "RemoveContainer" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372460 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.403753 4907 scope.go:117] "RemoveContainer" 
containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.429550 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.446288 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.454473 4907 scope.go:117] "RemoveContainer" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.472659 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473274 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473291 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473307 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473314 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473335 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473344 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473365 4907 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473372 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473408 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473416 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473435 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473442 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473462 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473470 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473775 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473796 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473810 4907 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473830 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473846 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473859 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473871 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.474131 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.474144 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.474388 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.476378 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.484408 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.490049 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.497266 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.518571 4907 scope.go:117] "RemoveContainer" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.539706 4907 scope.go:117] "RemoveContainer" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.541309 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257\": container with ID starting with ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257 not found: ID does not exist" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.541341 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257"} err="failed to get container status \"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257\": rpc error: code = NotFound desc = could not find container \"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257\": container with ID starting with ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257 not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 
18:29:49.541361 4907 scope.go:117] "RemoveContainer" containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.544367 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097\": container with ID starting with 742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097 not found: ID does not exist" containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.544397 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097"} err="failed to get container status \"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097\": rpc error: code = NotFound desc = could not find container \"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097\": container with ID starting with 742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097 not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.544415 4907 scope.go:117] "RemoveContainer" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.547179 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c\": container with ID starting with e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c not found: ID does not exist" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.547215 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c"} err="failed to get container status \"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c\": rpc error: code = NotFound desc = could not find container \"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c\": container with ID starting with e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.547235 4907 scope.go:117] "RemoveContainer" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.549927 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2\": container with ID starting with c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2 not found: ID does not exist" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.549955 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2"} err="failed to get container status \"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2\": rpc error: code = NotFound desc = could not find container \"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2\": container with ID starting with c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2 not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.600381 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=< Jan 27 18:29:49 crc 
kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:29:49 crc kubenswrapper[4907]: > Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.661799 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662069 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662290 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662523 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662691 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: 
I0127 18:29:49.662746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662776 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.760675 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" path="/var/lib/kubelet/pods/0c38d81c-140e-4516-b19c-8b58d7b25c43/volumes" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764818 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764906 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764939 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"ceilometer-0\" (UID: 
\"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764960 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.765009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.765096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.765147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.768378 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.768446 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.776398 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.776812 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.777063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.779186 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.800466 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.829880 4907 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:50 crc kubenswrapper[4907]: W0127 18:29:50.377775 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9edfaf34_4000_4bda_9c1f_0f4afa06325b.slice/crio-014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079 WatchSource:0}: Error finding container 014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079: Status 404 returned error can't find the container with id 014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079 Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.420886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.697022 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.757137 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.757392 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-68c4f5ddbb-hppxn" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" containerID="cri-o://4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" gracePeriod=60 Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.245890 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.248182 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.272031 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.323488 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.323655 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.340891 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.342575 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.357435 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.385116 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.387256 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.389716 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.391545 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.425869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.425930 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426091 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod 
\"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426131 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426146 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.427095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.443090 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.443122 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.445287 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3"} Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.445379 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079"} Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.461974 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.463576 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.469966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.482122 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528588 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: 
I0127 18:29:51.528629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528688 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528754 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528872 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.534660 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc 
kubenswrapper[4907]: I0127 18:29:51.534896 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.555710 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.557784 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.560682 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.560706 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.572999 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.578214 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.585010 4907 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.630730 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.630854 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.630936 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.631019 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.632098 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.650315 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.662999 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.713594 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.735065 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.735133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.736325 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.770276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.801802 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.803061 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.803167 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.812941 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.849437 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.851305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.851528 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.954464 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.954679 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.955668 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.975109 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:51.981258 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.164647 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.347514 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.501822 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92"} Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.512329 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerStarted","Data":"24a33a16aa309ea78d696862a5317333d8359ba8cc86e9a6e109d49fa5619506"} Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.559547 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.559714 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.575884 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.602327 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.627593 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.631443 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.633705 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.633819 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-68c4f5ddbb-hppxn" 
podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine"
Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.639670 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.901099 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.949691 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"]
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.084851 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"]
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.325112 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"]
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.566187 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerStarted","Data":"34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.566655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerStarted","Data":"5541dbd94d308edde81f390f872001f17cba0e9e73c21a93408e660e769fc36a"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.592376 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerStarted","Data":"d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.599293 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerStarted","Data":"9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.599342 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerStarted","Data":"aab5d99cdcda2ad46d8171aae24cc9784bbff20209395c96f5c87dba8e67dac7"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.604843 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r6sfn" event={"ID":"9fd63a47-2bbf-455b-8732-8d489507a2a0","Type":"ContainerStarted","Data":"be1e6ee56a701e7afc91a56a35c63d6d376cdacec6044e3996ca37c93df0370c"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.606730 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" event={"ID":"db79947d-82c1-4b66-8f0d-d34b96ff9a16","Type":"ContainerStarted","Data":"f849abd8948b412736e8e19289e2086ed2d13f3ca63f03d7d2ab7d0155250361"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.608177 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" podStartSLOduration=2.608154991 podStartE2EDuration="2.608154991s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.5926905 +0000 UTC m=+1448.721973112" watchObservedRunningTime="2026-01-27 18:29:53.608154991 +0000 UTC m=+1448.737437603"
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.629622 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerStarted","Data":"0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.629672 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerStarted","Data":"03a3eedcd78cad026e71366f39357faad9378a19c7692cf37d1bdcb55446cae0"}
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.678843 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-nlfm6" podStartSLOduration=2.678821113 podStartE2EDuration="2.678821113s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.641060568 +0000 UTC m=+1448.770343180" watchObservedRunningTime="2026-01-27 18:29:53.678821113 +0000 UTC m=+1448.808103725"
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.700803 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" podStartSLOduration=2.7007869380000002 podStartE2EDuration="2.700786938s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.659707018 +0000 UTC m=+1448.788989640" watchObservedRunningTime="2026-01-27 18:29:53.700786938 +0000 UTC m=+1448.830069550"
Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.735167 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b6e2-account-create-update-fr784" podStartSLOduration=2.735144476 podStartE2EDuration="2.735144476s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.679025348 +0000 UTC m=+1448.808307960" watchObservedRunningTime="2026-01-27 18:29:53.735144476 +0000 UTC m=+1448.864427088"
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.639134 4907 generic.go:334] "Generic (PLEG): container finished" podID="1567baee-fe0b-481f-9aca-c424237d77fd" containerID="34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f" exitCode=0
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.639196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerDied","Data":"34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f"}
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.641414 4907 generic.go:334] "Generic (PLEG): container finished" podID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerID="d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271" exitCode=0
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.641490 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerDied","Data":"d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271"}
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.643701 4907 generic.go:334] "Generic (PLEG): container finished" podID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerID="9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b" exitCode=0
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.643749 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerDied","Data":"9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b"}
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.654037 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258"}
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.660170 4907 generic.go:334] "Generic (PLEG): container finished" podID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerID="2bccdaf75b95d0168a686ecc348808d6673dada9c3494bcaf8bc20faf0ab6f1c" exitCode=0
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.660257 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r6sfn" event={"ID":"9fd63a47-2bbf-455b-8732-8d489507a2a0","Type":"ContainerDied","Data":"2bccdaf75b95d0168a686ecc348808d6673dada9c3494bcaf8bc20faf0ab6f1c"}
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.664227 4907 generic.go:334] "Generic (PLEG): container finished" podID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerID="5dcd6a3a423875bc06c0dd0f5d51a2b87f68629b62084c20315ec4e27b26da69" exitCode=0
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.664323 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" event={"ID":"db79947d-82c1-4b66-8f0d-d34b96ff9a16","Type":"ContainerDied","Data":"5dcd6a3a423875bc06c0dd0f5d51a2b87f68629b62084c20315ec4e27b26da69"}
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.666637 4907 generic.go:334] "Generic (PLEG): container finished" podID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerID="0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4" exitCode=0
Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.666701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerDied","Data":"0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4"}
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.228611 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lb6rn"
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.294077 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"743ace74-8ac2-43c7-807c-47379f8c50f4\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") "
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.294376 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"743ace74-8ac2-43c7-807c-47379f8c50f4\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") "
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.299797 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "743ace74-8ac2-43c7-807c-47379f8c50f4" (UID: "743ace74-8ac2-43c7-807c-47379f8c50f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.300788 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg" (OuterVolumeSpecName: "kube-api-access-t9hrg") pod "743ace74-8ac2-43c7-807c-47379f8c50f4" (UID: "743ace74-8ac2-43c7-807c-47379f8c50f4"). InnerVolumeSpecName "kube-api-access-t9hrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.401055 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.401089 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.677982 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerDied","Data":"24a33a16aa309ea78d696862a5317333d8359ba8cc86e9a6e109d49fa5619506"}
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.678021 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lb6rn"
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.678037 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24a33a16aa309ea78d696862a5317333d8359ba8cc86e9a6e109d49fa5619506"
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.680578 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561"}
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681056 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-central-agent" containerID="cri-o://72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3" gracePeriod=30
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681091 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" containerID="cri-o://f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561" gracePeriod=30
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681160 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" containerID="cri-o://f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92" gracePeriod=30
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681140 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" containerID="cri-o://6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258" gracePeriod=30
Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.721424 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.871655501 podStartE2EDuration="6.721404265s" podCreationTimestamp="2026-01-27 18:29:49 +0000 UTC" firstStartedPulling="2026-01-27 18:29:50.380751005 +0000 UTC m=+1445.510033617" lastFinishedPulling="2026-01-27 18:29:55.230499769 +0000 UTC m=+1450.359782381" observedRunningTime="2026-01-27 18:29:55.70753387 +0000 UTC m=+1450.836816482" watchObservedRunningTime="2026-01-27 18:29:55.721404265 +0000 UTC m=+1450.850686867"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.044338 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.121395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"1567baee-fe0b-481f-9aca-c424237d77fd\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.121647 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"1567baee-fe0b-481f-9aca-c424237d77fd\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.122538 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1567baee-fe0b-481f-9aca-c424237d77fd" (UID: "1567baee-fe0b-481f-9aca-c424237d77fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.131647 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl" (OuterVolumeSpecName: "kube-api-access-q9lrl") pod "1567baee-fe0b-481f-9aca-c424237d77fd" (UID: "1567baee-fe0b-481f-9aca-c424237d77fd"). InnerVolumeSpecName "kube-api-access-q9lrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.225008 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.225084 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.443930 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.531183 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.531689 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.534984 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" (UID: "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.574694 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz" (OuterVolumeSpecName: "kube-api-access-grbnz") pod "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" (UID: "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374"). InnerVolumeSpecName "kube-api-access-grbnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.653880 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.653919 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730602 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258" exitCode=2
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730663 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92" exitCode=0
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258"}
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730780 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92"}
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.737710 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.737817 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerDied","Data":"5541dbd94d308edde81f390f872001f17cba0e9e73c21a93408e660e769fc36a"}
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.737851 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5541dbd94d308edde81f390f872001f17cba0e9e73c21a93408e660e769fc36a"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.740946 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerDied","Data":"aab5d99cdcda2ad46d8171aae24cc9784bbff20209395c96f5c87dba8e67dac7"}
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.740980 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab5d99cdcda2ad46d8171aae24cc9784bbff20209395c96f5c87dba8e67dac7"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.741030 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.758320 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.767190 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.787808 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn"
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.861943 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"9fd63a47-2bbf-455b-8732-8d489507a2a0\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862190 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod \"22bda35b-bb7e-40c5-a263-56fdb4a28784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862314 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862392 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"9fd63a47-2bbf-455b-8732-8d489507a2a0\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862522 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"22bda35b-bb7e-40c5-a263-56fdb4a28784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") "
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.865913 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db79947d-82c1-4b66-8f0d-d34b96ff9a16" (UID: "db79947d-82c1-4b66-8f0d-d34b96ff9a16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.866401 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fd63a47-2bbf-455b-8732-8d489507a2a0" (UID: "9fd63a47-2bbf-455b-8732-8d489507a2a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.866774 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22bda35b-bb7e-40c5-a263-56fdb4a28784" (UID: "22bda35b-bb7e-40c5-a263-56fdb4a28784"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.869469 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv" (OuterVolumeSpecName: "kube-api-access-s45fv") pod "9fd63a47-2bbf-455b-8732-8d489507a2a0" (UID: "9fd63a47-2bbf-455b-8732-8d489507a2a0"). InnerVolumeSpecName "kube-api-access-s45fv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.870869 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8" (OuterVolumeSpecName: "kube-api-access-628s8") pod "db79947d-82c1-4b66-8f0d-d34b96ff9a16" (UID: "db79947d-82c1-4b66-8f0d-d34b96ff9a16"). InnerVolumeSpecName "kube-api-access-628s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.874233 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv" (OuterVolumeSpecName: "kube-api-access-tlxbv") pod "22bda35b-bb7e-40c5-a263-56fdb4a28784" (UID: "22bda35b-bb7e-40c5-a263-56fdb4a28784"). InnerVolumeSpecName "kube-api-access-tlxbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966627 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966659 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966679 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966688 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966698 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966707 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.762066 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn"
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.764358 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts"
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.766592 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784"
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.824624 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r6sfn" event={"ID":"9fd63a47-2bbf-455b-8732-8d489507a2a0","Type":"ContainerDied","Data":"be1e6ee56a701e7afc91a56a35c63d6d376cdacec6044e3996ca37c93df0370c"}
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.824895 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1e6ee56a701e7afc91a56a35c63d6d376cdacec6044e3996ca37c93df0370c"
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825043 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" event={"ID":"db79947d-82c1-4b66-8f0d-d34b96ff9a16","Type":"ContainerDied","Data":"f849abd8948b412736e8e19289e2086ed2d13f3ca63f03d7d2ab7d0155250361"}
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825603 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f849abd8948b412736e8e19289e2086ed2d13f3ca63f03d7d2ab7d0155250361"
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerDied","Data":"03a3eedcd78cad026e71366f39357faad9378a19c7692cf37d1bdcb55446cae0"}
Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825786 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a3eedcd78cad026e71366f39357faad9378a19c7692cf37d1bdcb55446cae0"
Jan 27 18:29:59 crc kubenswrapper[4907]: I0127 18:29:59.626849 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:29:59 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:29:59 crc kubenswrapper[4907]: >
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162092 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"]
Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162603 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162621 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162665 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162674 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162691 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162698 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162715 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162740 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162769 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162778 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162793 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162800 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163061 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163090 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163099 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" containerName="mariadb-account-create-update"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163109 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163126 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163132 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerName="mariadb-database-create"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.164071 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.166529 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.166649 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.178136 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"]
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.249807 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.249962 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.250832 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.352540 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.352630 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.352715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.353590 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"
Jan 27 18:30:00 crc kubenswrapper[4907]: I0127
18:30:00.358169 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.373273 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.489106 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.835220 4907 generic.go:334] "Generic (PLEG): container finished" podID="c735324d-bfae-4fc6-bde7-081be56ed371" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" exitCode=0 Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.835285 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerDied","Data":"4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.014319 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 18:30:01 crc kubenswrapper[4907]: W0127 18:30:01.035505 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode18727dc_e815_4722_bbce_4bfe5a8ee4f2.slice/crio-6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f WatchSource:0}: Error finding container 6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f: Status 404 returned error can't find the container with id 6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.125745 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277662 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277813 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277853 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277992 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: 
\"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.328899 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs" (OuterVolumeSpecName: "kube-api-access-krdxs") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "kube-api-access-krdxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.328946 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.348843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.382633 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.382667 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.382679 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.391687 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data" (OuterVolumeSpecName: "config-data") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.484710 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.857904 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerDied","Data":"52f3d532c41726df5137a369b3af84663e53079bc95d251a6728a34657806803"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.857980 4907 scope.go:117] "RemoveContainer" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.858234 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.861858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerStarted","Data":"ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.861913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerStarted","Data":"6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.961392 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.973767 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.022763 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:30:02 crc kubenswrapper[4907]: E0127 18:30:02.023356 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.023381 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.023679 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.024663 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.026818 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.027483 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mp6sz" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.027614 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.039170 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101367 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: 
\"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101609 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.205686 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.205831 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gc6s\" (UniqueName: 
\"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.205973 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.206001 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.211109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.215142 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.215164 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.229050 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.342389 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: W0127 18:30:02.868330 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0230a81d_2f87_4ad2_a9b5_19cfd369f0b4.slice/crio-81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69 WatchSource:0}: Error finding container 81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69: Status 404 returned error can't find the container with id 81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69 Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.886389 4907 generic.go:334] "Generic (PLEG): container finished" podID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerID="ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3" exitCode=0 Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.886659 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerDied","Data":"ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3"} Jan 27 18:30:02 crc kubenswrapper[4907]: 
I0127 18:30:02.892698 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3" exitCode=0 Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.892772 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3"} Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.909670 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.333902 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.450409 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.450757 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.450878 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " Jan 27 18:30:03 crc 
kubenswrapper[4907]: I0127 18:30:03.451754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume" (OuterVolumeSpecName: "config-volume") pod "e18727dc-e815-4722-bbce-4bfe5a8ee4f2" (UID: "e18727dc-e815-4722-bbce-4bfe5a8ee4f2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.472046 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p" (OuterVolumeSpecName: "kube-api-access-vq28p") pod "e18727dc-e815-4722-bbce-4bfe5a8ee4f2" (UID: "e18727dc-e815-4722-bbce-4bfe5a8ee4f2"). InnerVolumeSpecName "kube-api-access-vq28p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.501905 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e18727dc-e815-4722-bbce-4bfe5a8ee4f2" (UID: "e18727dc-e815-4722-bbce-4bfe5a8ee4f2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.553685 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.553714 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.553725 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.777422 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" path="/var/lib/kubelet/pods/c735324d-bfae-4fc6-bde7-081be56ed371/volumes" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.926712 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerStarted","Data":"81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69"} Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.930362 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerDied","Data":"6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f"} Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.930401 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f" Jan 
27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.930455 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:09 crc kubenswrapper[4907]: I0127 18:30:09.585258 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=< Jan 27 18:30:09 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:30:09 crc kubenswrapper[4907]: > Jan 27 18:30:12 crc kubenswrapper[4907]: I0127 18:30:12.038248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerStarted","Data":"bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39"} Jan 27 18:30:12 crc kubenswrapper[4907]: I0127 18:30:12.071363 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" podStartSLOduration=2.421057437 podStartE2EDuration="11.071337414s" podCreationTimestamp="2026-01-27 18:30:01 +0000 UTC" firstStartedPulling="2026-01-27 18:30:02.872451958 +0000 UTC m=+1458.001734570" lastFinishedPulling="2026-01-27 18:30:11.522731935 +0000 UTC m=+1466.652014547" observedRunningTime="2026-01-27 18:30:12.053675192 +0000 UTC m=+1467.182957804" watchObservedRunningTime="2026-01-27 18:30:12.071337414 +0000 UTC m=+1467.200620046" Jan 27 18:30:19 crc kubenswrapper[4907]: I0127 18:30:19.586224 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=< Jan 27 18:30:19 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:30:19 crc kubenswrapper[4907]: 
> Jan 27 18:30:19 crc kubenswrapper[4907]: I0127 18:30:19.830125 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:30:19 crc kubenswrapper[4907]: I0127 18:30:19.834162 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 18:30:23 crc kubenswrapper[4907]: I0127 18:30:23.180518 4907 generic.go:334] "Generic (PLEG): container finished" podID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerID="bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39" exitCode=0 Jan 27 18:30:23 crc kubenswrapper[4907]: I0127 18:30:23.180600 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerDied","Data":"bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39"} Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.597292 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:30:24 crc kubenswrapper[4907]: E0127 18:30:24.598186 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerName="collect-profiles" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.598202 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerName="collect-profiles" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.598436 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerName="collect-profiles" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.599167 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.613267 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.627536 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.629099 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.632434 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.651908 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672027 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672417 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsq65\" (UniqueName: 
\"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672525 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.675442 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.773792 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.773865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774104 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774164 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774483 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774545 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774739 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774786 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.776025 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.778227 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.783139 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s" (OuterVolumeSpecName: "kube-api-access-8gc6s") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "kube-api-access-8gc6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.784684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts" (OuterVolumeSpecName: "scripts") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.796610 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.797747 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.810761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.815226 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data" (OuterVolumeSpecName: "config-data") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876744 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876778 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876789 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876801 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.966836 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.975857 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.221289 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerDied","Data":"81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69"} Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.221333 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.221398 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.332365 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:30:25 crc kubenswrapper[4907]: E0127 18:30:25.332879 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerName="nova-cell0-conductor-db-sync" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.332901 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerName="nova-cell0-conductor-db-sync" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.333181 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerName="nova-cell0-conductor-db-sync" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.333924 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.336360 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mp6sz" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.336452 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.351318 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.390441 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxmt\" (UniqueName: \"kubernetes.io/projected/7a7fd860-ac95-4571-99c5-b416f9a9bae9-kube-api-access-8kxmt\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.390630 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.390858 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.492943 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxmt\" (UniqueName: 
\"kubernetes.io/projected/7a7fd860-ac95-4571-99c5-b416f9a9bae9-kube-api-access-8kxmt\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.493326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.493392 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.500219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.504206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.517270 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxmt\" (UniqueName: \"kubernetes.io/projected/7a7fd860-ac95-4571-99c5-b416f9a9bae9-kube-api-access-8kxmt\") pod \"nova-cell0-conductor-0\" (UID: 
\"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.527616 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.649760 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.713483 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:30:25 crc kubenswrapper[4907]: W0127 18:30:25.723616 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb430d70c_f51d_4ffd_856f_4035b5d053b7.slice/crio-44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa WatchSource:0}: Error finding container 44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa: Status 404 returned error can't find the container with id 44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.235217 4907 generic.go:334] "Generic (PLEG): container finished" podID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerID="bcadd918583503f919d13b0b59f8aab8c38430332c4b149a0d7656fa676f51fb" exitCode=0 Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.235683 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqf7g" event={"ID":"3cabef78-d5b3-4e61-9aa1-0f0529701fa0","Type":"ContainerDied","Data":"bcadd918583503f919d13b0b59f8aab8c38430332c4b149a0d7656fa676f51fb"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.235715 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqf7g" event={"ID":"3cabef78-d5b3-4e61-9aa1-0f0529701fa0","Type":"ContainerStarted","Data":"c4748e7778f96438dd6a8c4e757aeb996d6e8042073a5584b36550fcaef4ce97"} 
Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.237803 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238874 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561" exitCode=137 Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238927 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238947 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238959 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.241414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerStarted","Data":"8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.241454 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerStarted","Data":"44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa"} Jan 27 18:30:26 crc kubenswrapper[4907]: W0127 18:30:26.327392 4907 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a7fd860_ac95_4571_99c5_b416f9a9bae9.slice/crio-9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3 WatchSource:0}: Error finding container 9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3: Status 404 returned error can't find the container with id 9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3 Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.331295 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421296 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421365 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421416 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421502 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc 
kubenswrapper[4907]: I0127 18:30:26.421577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421679 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421715 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.425515 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.425597 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.428816 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts" (OuterVolumeSpecName: "scripts") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.432338 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq" (OuterVolumeSpecName: "kube-api-access-mqqhq") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "kube-api-access-mqqhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.465388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.525959 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.525995 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.526011 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.526022 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.526033 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.543734 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.575401 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data" (OuterVolumeSpecName: "config-data") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.627613 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.627641 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.259133 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a7fd860-ac95-4571-99c5-b416f9a9bae9","Type":"ContainerStarted","Data":"7904bdbf0ee42c22d0f04ac3efc61e09adf2803e876d6362e758a18e6af589b8"} Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.259497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a7fd860-ac95-4571-99c5-b416f9a9bae9","Type":"ContainerStarted","Data":"9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3"} Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.259694 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.261924 4907 generic.go:334] "Generic (PLEG): container finished" podID="b430d70c-f51d-4ffd-856f-4035b5d053b7" 
containerID="8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c" exitCode=0 Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.262301 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerDied","Data":"8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c"} Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.262410 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.280206 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.280186985 podStartE2EDuration="2.280186985s" podCreationTimestamp="2026-01-27 18:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:27.277007135 +0000 UTC m=+1482.406289757" watchObservedRunningTime="2026-01-27 18:30:27.280186985 +0000 UTC m=+1482.409469597" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.323111 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.350621 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.365549 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366374 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-central-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366422 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" 
containerName="ceilometer-central-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366450 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366460 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366506 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366516 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366537 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366577 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366927 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366954 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.367004 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-central-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.367032 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.370264 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.376803 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.377033 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.383957 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445709 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445767 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445802 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445830 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445862 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445906 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.446097 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548185 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " 
pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548259 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548277 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548302 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548334 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548393 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.552211 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.552279 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.554698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.557143 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.557507 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.568735 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.588357 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.700784 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.797146 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" path="/var/lib/kubelet/pods/9edfaf34-4000-4bda-9c1f-0f4afa06325b/volumes" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.970643 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.979075 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067157 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067368 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067397 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"b430d70c-f51d-4ffd-856f-4035b5d053b7\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067423 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"b430d70c-f51d-4ffd-856f-4035b5d053b7\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067963 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3cabef78-d5b3-4e61-9aa1-0f0529701fa0" (UID: "3cabef78-d5b3-4e61-9aa1-0f0529701fa0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067981 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b430d70c-f51d-4ffd-856f-4035b5d053b7" (UID: "b430d70c-f51d-4ffd-856f-4035b5d053b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.072982 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65" (OuterVolumeSpecName: "kube-api-access-jsq65") pod "3cabef78-d5b3-4e61-9aa1-0f0529701fa0" (UID: "3cabef78-d5b3-4e61-9aa1-0f0529701fa0"). InnerVolumeSpecName "kube-api-access-jsq65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.078324 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4" (OuterVolumeSpecName: "kube-api-access-xztb4") pod "b430d70c-f51d-4ffd-856f-4035b5d053b7" (UID: "b430d70c-f51d-4ffd-856f-4035b5d053b7"). InnerVolumeSpecName "kube-api-access-xztb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169906 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169942 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169955 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169964 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.232888 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:28 crc kubenswrapper[4907]: W0127 18:30:28.237601 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b WatchSource:0}: Error finding container e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b: Status 404 returned error can't find the container with id e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.274788 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerDied","Data":"44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa"} Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.275945 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.274837 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.276808 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b"} Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.278634 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.278650 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqf7g" event={"ID":"3cabef78-d5b3-4e61-9aa1-0f0529701fa0","Type":"ContainerDied","Data":"c4748e7778f96438dd6a8c4e757aeb996d6e8042073a5584b36550fcaef4ce97"} Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.278709 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4748e7778f96438dd6a8c4e757aeb996d6e8042073a5584b36550fcaef4ce97" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.654707 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.703154 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.893667 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.291531 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7"} Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.924372 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:30:29 crc kubenswrapper[4907]: E0127 18:30:29.925290 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerName="mariadb-database-create" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925316 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerName="mariadb-database-create" Jan 27 
18:30:29 crc kubenswrapper[4907]: E0127 18:30:29.925360 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" containerName="mariadb-account-create-update" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925368 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" containerName="mariadb-account-create-update" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925647 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" containerName="mariadb-account-create-update" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925677 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerName="mariadb-database-create" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.926536 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.928868 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.930744 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.930999 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.931372 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.937781 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.049919 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.049990 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.050032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.050169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.153098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.153371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod 
\"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.153427 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.158388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.166111 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.174169 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.180267 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.193757 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.247633 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.314746 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" containerID="cri-o://317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" gracePeriod=2 Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.315052 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f"} Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.846821 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.076405 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.192949 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"32a7503d-bec1-4b22-a132-abaa924af073\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.193108 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"32a7503d-bec1-4b22-a132-abaa924af073\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.193175 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"32a7503d-bec1-4b22-a132-abaa924af073\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.193813 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities" (OuterVolumeSpecName: "utilities") pod "32a7503d-bec1-4b22-a132-abaa924af073" (UID: "32a7503d-bec1-4b22-a132-abaa924af073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.201210 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj" (OuterVolumeSpecName: "kube-api-access-2xjmj") pod "32a7503d-bec1-4b22-a132-abaa924af073" (UID: "32a7503d-bec1-4b22-a132-abaa924af073"). InnerVolumeSpecName "kube-api-access-2xjmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.295759 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.295793 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.320103 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32a7503d-bec1-4b22-a132-abaa924af073" (UID: "32a7503d-bec1-4b22-a132-abaa924af073"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330307 4907 generic.go:334] "Generic (PLEG): container finished" podID="32a7503d-bec1-4b22-a132-abaa924af073" containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" exitCode=0 Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330403 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"02a7ef787ad2af55aee76003ed5f2c734d79246a8b02f7fc6a11cdc00fcff410"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330401 
4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330490 4907 scope.go:117] "RemoveContainer" containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.338075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerStarted","Data":"75948b867e63aed3cefc02d5b081a1666ba2b4c5ee153c818cd47b52df0895f4"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.340811 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.367619 4907 scope.go:117] "RemoveContainer" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.374529 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.387947 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.398072 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.402229 4907 scope.go:117] "RemoveContainer" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.431195 4907 scope.go:117] "RemoveContainer" 
containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" Jan 27 18:30:31 crc kubenswrapper[4907]: E0127 18:30:31.431791 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db\": container with ID starting with 317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db not found: ID does not exist" containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.431826 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db"} err="failed to get container status \"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db\": rpc error: code = NotFound desc = could not find container \"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db\": container with ID starting with 317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db not found: ID does not exist" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.431848 4907 scope.go:117] "RemoveContainer" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" Jan 27 18:30:31 crc kubenswrapper[4907]: E0127 18:30:31.432427 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b\": container with ID starting with c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b not found: ID does not exist" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.432456 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b"} err="failed to get container status \"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b\": rpc error: code = NotFound desc = could not find container \"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b\": container with ID starting with c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b not found: ID does not exist"
Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.432476 4907 scope.go:117] "RemoveContainer" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c"
Jan 27 18:30:31 crc kubenswrapper[4907]: E0127 18:30:31.432764 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c\": container with ID starting with 42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c not found: ID does not exist" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c"
Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.432830 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c"} err="failed to get container status \"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c\": rpc error: code = NotFound desc = could not find container \"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c\": container with ID starting with 42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c not found: ID does not exist"
Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.767319 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a7503d-bec1-4b22-a132-abaa924af073" path="/var/lib/kubelet/pods/32a7503d-bec1-4b22-a132-abaa924af073/volumes"
Jan 27 18:30:32 crc kubenswrapper[4907]: I0127 18:30:32.356191 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021"}
Jan 27 18:30:32 crc kubenswrapper[4907]: I0127 18:30:32.357190 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 18:30:32 crc kubenswrapper[4907]: I0127 18:30:32.381979 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.746838652 podStartE2EDuration="5.381960755s" podCreationTimestamp="2026-01-27 18:30:27 +0000 UTC" firstStartedPulling="2026-01-27 18:30:28.241904226 +0000 UTC m=+1483.371186838" lastFinishedPulling="2026-01-27 18:30:31.877026329 +0000 UTC m=+1487.006308941" observedRunningTime="2026-01-27 18:30:32.376547971 +0000 UTC m=+1487.505830593" watchObservedRunningTime="2026-01-27 18:30:32.381960755 +0000 UTC m=+1487.511243367"
Jan 27 18:30:35 crc kubenswrapper[4907]: I0127 18:30:35.398740 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerStarted","Data":"7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3"}
Jan 27 18:30:35 crc kubenswrapper[4907]: I0127 18:30:35.427804 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-lvm8r" podStartSLOduration=2.307887749 podStartE2EDuration="6.427780047s" podCreationTimestamp="2026-01-27 18:30:29 +0000 UTC" firstStartedPulling="2026-01-27 18:30:30.847533889 +0000 UTC m=+1485.976816511" lastFinishedPulling="2026-01-27 18:30:34.967426197 +0000 UTC m=+1490.096708809" observedRunningTime="2026-01-27 18:30:35.419512323 +0000 UTC m=+1490.548794935" watchObservedRunningTime="2026-01-27 18:30:35.427780047 +0000 UTC m=+1490.557062669"
Jan 27 18:30:35 crc kubenswrapper[4907]: I0127 18:30:35.700780 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.320874 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"]
Jan 27 18:30:36 crc kubenswrapper[4907]: E0127 18:30:36.334359 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-content"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.334405 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-content"
Jan 27 18:30:36 crc kubenswrapper[4907]: E0127 18:30:36.334458 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.334468 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server"
Jan 27 18:30:36 crc kubenswrapper[4907]: E0127 18:30:36.334515 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-utilities"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.334527 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-utilities"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.335090 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.339791 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.352135 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.353008 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.353039 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.423974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.424026 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.424160 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.424263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529435 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529757 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529789 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.539129 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.545363 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.557529 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.581232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.601966 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.603760 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.615005 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.647628 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.679745 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.710937 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.712909 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.719376 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.736967 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.737190 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.739503 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.748918 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.801481 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.803312 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.811401 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.829729 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.841906 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.841947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842005 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842049 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842094 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842260 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.846928 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.863176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.875805 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.889582 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.890029 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.899152 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.909959 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.910077 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.912594 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.922275 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944190 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944234 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944288 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944320 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944353 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944411 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944446 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944485 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944526 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944594 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944618 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944651 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944724 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944757 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.953651 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.961461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.974824 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.982578 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056206 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056610 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056659 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.058653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059635 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059732 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059888 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.061015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.061719 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.061752 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.063991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.064264 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.064332 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.065206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.065254 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.066182 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.066744 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.068705 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.076602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.080425 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.088619 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.093494 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.106244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.110896 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.116967 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.125985 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.159116 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.415895 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.438670 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"]
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.963618 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.985402 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.226824 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"]
Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.230459 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7"
Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.246294 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.282193 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.287023 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"]
Jan 27 18:30:38 crc kubenswrapper[4907]: W0127 18:30:38.293925 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dbf3816_36b8_40ed_8dc6_3faf4b571dd6.slice/crio-d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851 WatchSource:0}: Error finding container d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851: Status 404 returned error can't find the container with id d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851
Jan 27 18:30:38 crc kubenswrapper[4907]: I0127
18:30:38.337157 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.361350 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.361745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.361959 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.362120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.375824 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.465925 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.466002 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.466071 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.466106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.476230 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.476642 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.481073 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.487934 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.497790 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerStarted","Data":"d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.501405 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerStarted","Data":"92c93f931d8f51099774c248291d876431d14b5c43aa83d56753a0ac2d31f02a"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.505707 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerStarted","Data":"88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.514022 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerStarted","Data":"1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.514058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerStarted","Data":"339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.518900 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerStarted","Data":"e04f632c782ad650c5f21f4659e785c7460b983a1b0277db3ff9956d9ab7061b"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.555997 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-749bg" podStartSLOduration=2.555974745 podStartE2EDuration="2.555974745s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:38.531975497 +0000 UTC m=+1493.661258109" watchObservedRunningTime="2026-01-27 18:30:38.555974745 +0000 UTC m=+1493.685257347" Jan 27 18:30:38 crc kubenswrapper[4907]: W0127 18:30:38.652505 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a WatchSource:0}: Error finding 
container 006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a: Status 404 returned error can't find the container with id 006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.662630 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.687524 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.377609 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"] Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.556875 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerStarted","Data":"006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.564686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerStarted","Data":"53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.572698 4907 generic.go:334] "Generic (PLEG): container finished" podID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerID="c47423de91ef3cdc23957a64f0feb2303eae5d5532344bab60096094e88a4b1a" exitCode=0 Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.572770 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerDied","Data":"c47423de91ef3cdc23957a64f0feb2303eae5d5532344bab60096094e88a4b1a"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.609597 4907 
generic.go:334] "Generic (PLEG): container finished" podID="c16f7a68-05a6-494f-94ce-1774118b0592" containerID="7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3" exitCode=0 Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.609668 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerDied","Data":"7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.875509 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.912267 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.638735 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerStarted","Data":"1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3"} Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.643410 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerStarted","Data":"1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e"} Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.687984 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" podStartSLOduration=2.687959369 podStartE2EDuration="2.687959369s" podCreationTimestamp="2026-01-27 18:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:40.654906775 +0000 UTC m=+1495.784189387" watchObservedRunningTime="2026-01-27 18:30:40.687959369 +0000 UTC 
m=+1495.817241981" Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.703933 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-nft4l" podStartSLOduration=4.70391004 podStartE2EDuration="4.70391004s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:40.68549661 +0000 UTC m=+1495.814779222" watchObservedRunningTime="2026-01-27 18:30:40.70391004 +0000 UTC m=+1495.833192652" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.236977 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354474 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354615 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354636 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7928g\" (UniqueName: 
\"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.367777 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g" (OuterVolumeSpecName: "kube-api-access-7928g") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "kube-api-access-7928g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.377348 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts" (OuterVolumeSpecName: "scripts") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.403021 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.403481 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data" (OuterVolumeSpecName: "config-data") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458423 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458451 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458461 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458473 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.680753 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.683613 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerDied","Data":"75948b867e63aed3cefc02d5b081a1666ba2b4c5ee153c818cd47b52df0895f4"} Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.684529 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75948b867e63aed3cefc02d5b081a1666ba2b4c5ee153c818cd47b52df0895f4" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.684570 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.708458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerStarted","Data":"d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.710191 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerStarted","Data":"19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.708900 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata" containerID="cri-o://d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2" gracePeriod=30 Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.708588 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log" 
containerID="cri-o://19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4" gracePeriod=30 Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.711254 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerStarted","Data":"953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.711334 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerStarted","Data":"fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.715256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerStarted","Data":"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.726697 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerStarted","Data":"88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.726860 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd" gracePeriod=30 Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.753602 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.009641476 podStartE2EDuration="7.753576338s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" 
firstStartedPulling="2026-01-27 18:30:37.992233123 +0000 UTC m=+1493.121515735" lastFinishedPulling="2026-01-27 18:30:42.736167985 +0000 UTC m=+1497.865450597" observedRunningTime="2026-01-27 18:30:43.733210993 +0000 UTC m=+1498.862493605" watchObservedRunningTime="2026-01-27 18:30:43.753576338 +0000 UTC m=+1498.882858950" Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.791333 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.749368703 podStartE2EDuration="7.791312095s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="2026-01-27 18:30:38.669439142 +0000 UTC m=+1493.798721754" lastFinishedPulling="2026-01-27 18:30:42.711382534 +0000 UTC m=+1497.840665146" observedRunningTime="2026-01-27 18:30:43.772692699 +0000 UTC m=+1498.901975401" watchObservedRunningTime="2026-01-27 18:30:43.791312095 +0000 UTC m=+1498.920594697" Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.812760 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.098280052 podStartE2EDuration="7.812743411s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="2026-01-27 18:30:37.996712489 +0000 UTC m=+1493.125995101" lastFinishedPulling="2026-01-27 18:30:42.711175848 +0000 UTC m=+1497.840458460" observedRunningTime="2026-01-27 18:30:43.803487989 +0000 UTC m=+1498.932770601" watchObservedRunningTime="2026-01-27 18:30:43.812743411 +0000 UTC m=+1498.942026023" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.650748 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.222068772 podStartE2EDuration="8.650730343s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="2026-01-27 18:30:38.282491236 +0000 UTC m=+1493.411773848" lastFinishedPulling="2026-01-27 18:30:42.711152807 +0000 UTC 
m=+1497.840435419" observedRunningTime="2026-01-27 18:30:43.834153346 +0000 UTC m=+1498.963435958" watchObservedRunningTime="2026-01-27 18:30:44.650730343 +0000 UTC m=+1499.780012955" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.652956 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 18:30:44 crc kubenswrapper[4907]: E0127 18:30:44.653477 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" containerName="aodh-db-sync" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.653496 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" containerName="aodh-db-sync" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.653726 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" containerName="aodh-db-sync" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.655969 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.664080 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.664204 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.664082 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.674797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740408 4907 generic.go:334] "Generic (PLEG): container finished" podID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerID="d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2" exitCode=0 Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740436 4907 generic.go:334] "Generic (PLEG): container finished" podID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerID="19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4" exitCode=143 Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740523 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerDied","Data":"d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2"} Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740581 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerDied","Data":"19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4"} Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.757892 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.758065 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.758201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.758305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.860881 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.861067 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 
18:30:44.861195 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.861248 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.870505 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.871377 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.887526 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.890237 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.979320 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.588059 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680638 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680694 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680885 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680938 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.681971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs" (OuterVolumeSpecName: "logs") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.687371 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6" (OuterVolumeSpecName: "kube-api-access-vt2z6") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "kube-api-access-vt2z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:30:45 crc kubenswrapper[4907]: E0127 18:30:45.722073 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data podName:4269e70c-a481-47cf-a9fe-7d9095cb4445 nodeName:}" failed. No retries permitted until 2026-01-27 18:30:46.22204387 +0000 UTC m=+1501.351326492 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445") : error deleting /var/lib/kubelet/pods/4269e70c-a481-47cf-a9fe-7d9095cb4445/volume-subpaths: remove /var/lib/kubelet/pods/4269e70c-a481-47cf-a9fe-7d9095cb4445/volume-subpaths: no such file or directory
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.727754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.774802 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.783621 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.783674 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.783684 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:45 crc kubenswrapper[4907]: W0127 18:30:45.785833 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763 WatchSource:0}: Error finding container 2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763: Status 404 returned error can't find the container with id 2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.796665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerDied","Data":"88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7"}
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.796707 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.796733 4907 scope.go:117] "RemoveContainer" containerID="d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.852931 4907 scope.go:117] "RemoveContainer" containerID="19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.295567 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.308615 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data" (OuterVolumeSpecName: "config-data") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.399090 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.601343 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.654921 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.676330 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: E0127 18:30:46.678657 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.678683 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log"
Jan 27 18:30:46 crc kubenswrapper[4907]: E0127 18:30:46.678744 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.678754 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.679253 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.679300 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.686902 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.692145 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.692238 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.698480 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.708895 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.709010 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.710466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.710543 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.710603 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.794297 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a"}
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.794347 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763"}
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816395 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816805 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816828 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.818567 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.824188 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.825106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.837190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.866611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.017902 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.067618 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.068951 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.109301 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.127683 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.165773 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.166147 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.226163 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"]
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.226442 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" containerID="cri-o://f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b" gracePeriod=10
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.420333 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.645421 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.772235 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" path="/var/lib/kubelet/pods/4269e70c-a481-47cf-a9fe-7d9095cb4445/volumes"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.821078 4907 generic.go:334] "Generic (PLEG): container finished" podID="719784a4-cead-4054-ac6b-e7e45118be8c" containerID="f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b" exitCode=0
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.821132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerDied","Data":"f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b"}
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.826602 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerStarted","Data":"1e16f7131fae9377e2aba255ec1b7af15c6d7f5b01871af06aa6dfe6df2514ff"}
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.834033 4907 generic.go:334] "Generic (PLEG): container finished" podID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerID="1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838" exitCode=0
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.835207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerDied","Data":"1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838"}
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.897765 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.215136 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.247852 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.248196 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354293 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354400 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354467 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354743 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354797 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.365812 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd" (OuterVolumeSpecName: "kube-api-access-ds4bd") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "kube-api-access-ds4bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.445355 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.446058 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config" (OuterVolumeSpecName: "config") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.457506 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.457538 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.457548 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.495173 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.511098 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.524940 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.530887 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.559474 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.559503 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.559513 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.643622 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.644113 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" containerID="cri-o://b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.644320 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" containerID="cri-o://160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.644387 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" containerID="cri-o://15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.645395 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" containerID="cri-o://523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.654223 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.240:3000/\": EOF"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870359 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021" exitCode=0
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870390 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528" exitCode=2
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870465 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870492 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.882824 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.882864 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerDied","Data":"fb2fc41aa6c79868126426826ea77ab0aae08150f293d25b0312a5646e2300eb"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.882915 4907 scope.go:117] "RemoveContainer" containerID="f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.913438 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerStarted","Data":"439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.913490 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerStarted","Data":"59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.938907 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.938890764 podStartE2EDuration="2.938890764s" podCreationTimestamp="2026-01-27 18:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:48.934619383 +0000 UTC m=+1504.063901995" watchObservedRunningTime="2026-01-27 18:30:48.938890764 +0000 UTC m=+1504.068173376"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.049729 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"]
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.075337 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"]
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.641618 4907 scope.go:117] "RemoveContainer" containerID="838120c8a589e1eec6c4e9a5c93d0700a4e4ff1ee248a15cba9fab8e23320155"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.788437 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" path="/var/lib/kubelet/pods/719784a4-cead-4054-ac6b-e7e45118be8c/volumes"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.857279 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.913641 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") "
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.913991 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") "
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.914021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") "
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.914089 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod
\"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.922439 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl" (OuterVolumeSpecName: "kube-api-access-lf2fl") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "kube-api-access-lf2fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.929510 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts" (OuterVolumeSpecName: "scripts") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.939517 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerDied","Data":"339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c"} Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.939572 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.939622 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.947793 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7" exitCode=0 Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.948874 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7"} Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.976806 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.005757 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data" (OuterVolumeSpecName: "config-data") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017395 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017439 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017453 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017468 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.964583 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.049779 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.050092 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" containerID="cri-o://fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.050187 4907 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" containerID="cri-o://953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.062110 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.062295 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" containerID="cri-o://a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.094023 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.094320 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-log" containerID="cri-o://59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.094520 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" containerID="cri-o://439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.981171 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.987951 4907 generic.go:334] 
"Generic (PLEG): container finished" podID="9af43216-6482-4024-a320-fa8855680d03" containerID="fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1" exitCode=143 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.988027 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerDied","Data":"fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993147 4907 generic.go:334] "Generic (PLEG): container finished" podID="1037249a-76d6-42a0-8336-dc2d8b998362" containerID="439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be" exitCode=0 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993184 4907 generic.go:334] "Generic (PLEG): container finished" podID="1037249a-76d6-42a0-8336-dc2d8b998362" containerID="59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d" exitCode=143 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerDied","Data":"439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993237 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerDied","Data":"59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993249 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerDied","Data":"1e16f7131fae9377e2aba255ec1b7af15c6d7f5b01871af06aa6dfe6df2514ff"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993259 4907 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="1e16f7131fae9377e2aba255ec1b7af15c6d7f5b01871af06aa6dfe6df2514ff" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.018881 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.018950 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.024626 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063135 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063229 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063493 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: 
\"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063596 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063817 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs" (OuterVolumeSpecName: "logs") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.064299 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.070863 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b" (OuterVolumeSpecName: "kube-api-access-gxw4b") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "kube-api-access-gxw4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.071146 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.072633 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.074464 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.074509 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.106371 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data" (OuterVolumeSpecName: "config-data") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.109731 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.135446 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166521 4907 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166590 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166603 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166615 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxw4b\" (UniqueName: 
\"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.675129 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.215:5353: i/o timeout" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.003924 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.512877 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.532609 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544158 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.544890 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerName="nova-manage" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerName="nova-manage" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.544948 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="init" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544961 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="init" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.544981 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" 
containerName="nova-metadata-log" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544989 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-log" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.545012 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545020 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.545043 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545053 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545390 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerName="nova-manage" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545422 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545440 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545455 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-log" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.547229 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.550369 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.555658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.557082 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.610945 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.612013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.613647 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.613777 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"nova-metadata-0\" 
(UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.614108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.716716 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.716849 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.716921 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.717098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 
18:30:53.717130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.717661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.723483 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.724601 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.725274 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.738773 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"nova-metadata-0\" (UID: 
\"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.788044 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" path="/var/lib/kubelet/pods/1037249a-76d6-42a0-8336-dc2d8b998362/volumes" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.875828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:54 crc kubenswrapper[4907]: I0127 18:30:54.581371 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.031897 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerStarted","Data":"d180305b0a777de2ed578a449a3d5ecb18240c9610de3f6b48a1a28bd4f905ad"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.034521 4907 generic.go:334] "Generic (PLEG): container finished" podID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerID="1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3" exitCode=0 Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.034590 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerDied","Data":"1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.039488 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f" exitCode=0 Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.039532 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.045759 4907 generic.go:334] "Generic (PLEG): container finished" podID="9af43216-6482-4024-a320-fa8855680d03" containerID="953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb" exitCode=0 Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.045782 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerDied","Data":"953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.254862 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.349760 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.363943 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364116 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod 
\"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364158 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364662 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs" (OuterVolumeSpecName: "logs") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.365052 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.385834 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v" (OuterVolumeSpecName: "kube-api-access-skg6v") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "kube-api-access-skg6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.445115 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.456755 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data" (OuterVolumeSpecName: "config-data") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.465821 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466183 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466433 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466618 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466728 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466842 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.467474 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.467543 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.467636 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.468264 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd" (OuterVolumeSpecName: 
"run-httpd") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.468790 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.470971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth" (OuterVolumeSpecName: "kube-api-access-jbpth") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "kube-api-access-jbpth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.479843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts" (OuterVolumeSpecName: "scripts") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.536768 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569585 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569627 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569638 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569652 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569662 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.603175 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.616551 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data" (OuterVolumeSpecName: "config-data") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.671803 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.672232 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.040649 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.076423 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.076476 4907 scope.go:117] "RemoveContainer" containerID="160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.076761 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.089502 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerDied","Data":"e04f632c782ad650c5f21f4659e785c7460b983a1b0277db3ff9956d9ab7061b"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.089648 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.093796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"45bfe136-4245-4d16-9c68-2a21136b3b9a\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.093878 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"45bfe136-4245-4d16-9c68-2a21136b3b9a\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.094056 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"45bfe136-4245-4d16-9c68-2a21136b3b9a\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.097329 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerStarted","Data":"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.097779 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerStarted","Data":"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102339 4907 generic.go:334] "Generic (PLEG): container finished" podID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" exitCode=0 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerDied","Data":"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerDied","Data":"92c93f931d8f51099774c248291d876431d14b5c43aa83d56753a0ac2d31f02a"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102506 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.104244 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl" (OuterVolumeSpecName: "kube-api-access-bw6sl") pod "45bfe136-4245-4d16-9c68-2a21136b3b9a" (UID: "45bfe136-4245-4d16-9c68-2a21136b3b9a"). InnerVolumeSpecName "kube-api-access-bw6sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.113914 4907 scope.go:117] "RemoveContainer" containerID="15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123494 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" containerID="cri-o://ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123727 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" containerID="cri-o://5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123855 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" containerID="cri-o://046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.138402 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123655 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" 
containerID="cri-o://d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.160654 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.173893 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45bfe136-4245-4d16-9c68-2a21136b3b9a" (UID: "45bfe136-4245-4d16-9c68-2a21136b3b9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.174060 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data" (OuterVolumeSpecName: "config-data") pod "45bfe136-4245-4d16-9c68-2a21136b3b9a" (UID: "45bfe136-4245-4d16-9c68-2a21136b3b9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.205942 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.205983 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.205998 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.221428 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.257631 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.278113 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.278959 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.278982 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279014 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279047 4907 
state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279063 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279074 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279089 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279099 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279126 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279135 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279147 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279156 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279186 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 
18:30:56.279194 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279487 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279510 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279522 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279536 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279618 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279633 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279650 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.282399 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.284177 4907 scope.go:117] "RemoveContainer" containerID="523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.289811 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.290040 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.301151 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.304034 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.306406 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.332768 4907 scope.go:117] "RemoveContainer" containerID="b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.344587 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.356972 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.361503 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.36148479 podStartE2EDuration="3.36148479s" podCreationTimestamp="2026-01-27 18:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:56.204368289 +0000 UTC m=+1511.333650901" 
watchObservedRunningTime="2026-01-27 18:30:56.36148479 +0000 UTC m=+1511.490767402" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.366076 4907 scope.go:117] "RemoveContainer" containerID="953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.379127 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.434081604 podStartE2EDuration="12.379103747s" podCreationTimestamp="2026-01-27 18:30:44 +0000 UTC" firstStartedPulling="2026-01-27 18:30:45.796918266 +0000 UTC m=+1500.926200878" lastFinishedPulling="2026-01-27 18:30:54.741940409 +0000 UTC m=+1509.871223021" observedRunningTime="2026-01-27 18:30:56.237349161 +0000 UTC m=+1511.366631793" watchObservedRunningTime="2026-01-27 18:30:56.379103747 +0000 UTC m=+1511.508386379" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.395469 4907 scope.go:117] "RemoveContainer" containerID="fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410044 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410110 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410137 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410177 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410225 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410238 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410257 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"ceilometer-0\" (UID: 
\"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410271 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.424844 4907 scope.go:117] "RemoveContainer" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.464183 4907 scope.go:117] "RemoveContainer" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.465930 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646\": container with ID starting with a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646 not found: ID does not exist" 
containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.465969 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646"} err="failed to get container status \"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646\": rpc error: code = NotFound desc = could not find container \"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646\": container with ID starting with a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646 not found: ID does not exist" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.488916 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.507585 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512249 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512286 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") 
" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512383 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512435 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512452 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512523 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512566 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.513458 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.514085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.514137 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.521866 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.523884 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.533010 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.533066 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.533222 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.534277 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.537875 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.543183 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc 
kubenswrapper[4907]: I0127 18:30:56.549540 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.550257 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.552053 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.555893 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.558263 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.558765 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.614355 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.614530 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.614591 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.618092 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.635212 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.719334 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.724324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.725018 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.738436 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.738439 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.746572 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.865202 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.061245 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.139483 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.139825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.140024 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.140203 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: 
\"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.144476 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerDied","Data":"53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.144514 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.144821 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.148359 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v" (OuterVolumeSpecName: "kube-api-access-qgx2v") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" (UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "kube-api-access-qgx2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.148409 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts" (OuterVolumeSpecName: "scripts") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" (UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158167 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22" exitCode=0 Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158255 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13" exitCode=0 Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158265 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a" exitCode=0 Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158543 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158761 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158843 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.183393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" 
(UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.192384 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data" (OuterVolumeSpecName: "config-data") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" (UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.215313 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245056 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245089 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245104 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245117 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.382192 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.660090 4907 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.770221 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" path="/var/lib/kubelet/pods/43c0bea1-2042-4d24-81b3-bc7c93696fcb/volumes" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.772004 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" path="/var/lib/kubelet/pods/45bfe136-4245-4d16-9c68-2a21136b3b9a/volumes" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.772766 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af43216-6482-4024-a320-fa8855680d03" path="/var/lib/kubelet/pods/9af43216-6482-4024-a320-fa8855680d03/volumes" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.172941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerStarted","Data":"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.172993 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerStarted","Data":"5489fcea5b6ae89e6b35c046631a54afc66829b16e074f7f6498d1c0a256c442"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.180367 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerStarted","Data":"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.180418 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerStarted","Data":"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.180430 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerStarted","Data":"7c25ec383f00142113ac9300e8dffba737e591fb0b61aea496b9da56aff1e861"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.184610 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:30:58 crc kubenswrapper[4907]: E0127 18:30:58.185250 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerName="nova-cell1-conductor-db-sync" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.185285 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerName="nova-cell1-conductor-db-sync" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.185613 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerName="nova-cell1-conductor-db-sync" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.186617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"eab5edc13640e8653e78a3f8680981b7da42ad91be33abedd1cee2900fc2aa2c"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.186738 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.188907 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.229181 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.249727 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.249704894 podStartE2EDuration="2.249704894s" podCreationTimestamp="2026-01-27 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:58.195722909 +0000 UTC m=+1513.325005521" watchObservedRunningTime="2026-01-27 18:30:58.249704894 +0000 UTC m=+1513.378987516" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.264865 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.264843762 podStartE2EDuration="2.264843762s" podCreationTimestamp="2026-01-27 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:58.242091269 +0000 UTC m=+1513.371373881" watchObservedRunningTime="2026-01-27 18:30:58.264843762 +0000 UTC m=+1513.394126374" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.303145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbslb\" (UniqueName: \"kubernetes.io/projected/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-kube-api-access-wbslb\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.303253 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.303304 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.405150 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.405251 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.405445 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbslb\" (UniqueName: \"kubernetes.io/projected/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-kube-api-access-wbslb\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.411844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.412010 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.424241 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbslb\" (UniqueName: \"kubernetes.io/projected/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-kube-api-access-wbslb\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.511296 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.876917 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.878646 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:59 crc kubenswrapper[4907]: I0127 18:30:59.005473 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:30:59 crc kubenswrapper[4907]: I0127 18:30:59.200592 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} Jan 27 18:30:59 crc kubenswrapper[4907]: I0127 18:30:59.202665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c","Type":"ContainerStarted","Data":"d2f11f1cbabd63614a4b596ec577ead974caed76d8f23b4d2217116ea11f12f0"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.213417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.213680 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.215259 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c","Type":"ContainerStarted","Data":"052885f519be471da5b8312edf245577ef787fd8e03dd0d2cd3b61536d415648"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.216433 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.233689 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.233672554 podStartE2EDuration="2.233672554s" podCreationTimestamp="2026-01-27 18:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:00.233011966 +0000 UTC m=+1515.362294578" watchObservedRunningTime="2026-01-27 18:31:00.233672554 +0000 UTC m=+1515.362955166" Jan 27 18:31:01 crc kubenswrapper[4907]: I0127 18:31:01.866163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 18:31:02 crc kubenswrapper[4907]: I0127 18:31:02.251160 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} Jan 27 18:31:02 crc kubenswrapper[4907]: I0127 18:31:02.251219 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:31:02 crc kubenswrapper[4907]: I0127 18:31:02.274957 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.344565665 podStartE2EDuration="6.274933554s" podCreationTimestamp="2026-01-27 18:30:56 +0000 UTC" firstStartedPulling="2026-01-27 18:30:57.386743615 +0000 UTC m=+1512.516026227" lastFinishedPulling="2026-01-27 18:31:01.317111504 +0000 UTC m=+1516.446394116" observedRunningTime="2026-01-27 
18:31:02.270697175 +0000 UTC m=+1517.399979807" watchObservedRunningTime="2026-01-27 18:31:02.274933554 +0000 UTC m=+1517.404216166" Jan 27 18:31:03 crc kubenswrapper[4907]: I0127 18:31:03.876125 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:03 crc kubenswrapper[4907]: I0127 18:31:03.876754 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:04 crc kubenswrapper[4907]: I0127 18:31:04.890797 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:04 crc kubenswrapper[4907]: I0127 18:31:04.890841 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.636431 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.639241 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.866493 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.904366 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 18:31:07 crc kubenswrapper[4907]: I0127 18:31:07.359341 4907 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 18:31:07 crc kubenswrapper[4907]: I0127 18:31:07.717794 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:07 crc kubenswrapper[4907]: I0127 18:31:07.717867 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:08 crc kubenswrapper[4907]: I0127 18:31:08.547055 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 18:31:13 crc kubenswrapper[4907]: I0127 18:31:13.885023 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.887712 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.887765 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc 
kubenswrapper[4907]: E0127 18:31:13.888374 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.889674 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: I0127 18:31:13.890109 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.892871 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: I0127 18:31:13.895226 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.402143 4907 generic.go:334] "Generic (PLEG): container finished" podID="d1efa2c3-6982-45b0-830c-043caf2979ba" 
containerID="88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd" exitCode=137 Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.402429 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerDied","Data":"88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd"} Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.403072 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerDied","Data":"006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a"} Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.403092 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.408352 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.447007 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.512594 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"d1efa2c3-6982-45b0-830c-043caf2979ba\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.512874 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"d1efa2c3-6982-45b0-830c-043caf2979ba\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.512983 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"d1efa2c3-6982-45b0-830c-043caf2979ba\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.525229 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2" (OuterVolumeSpecName: "kube-api-access-tdwr2") pod "d1efa2c3-6982-45b0-830c-043caf2979ba" (UID: "d1efa2c3-6982-45b0-830c-043caf2979ba"). InnerVolumeSpecName "kube-api-access-tdwr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.577144 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1efa2c3-6982-45b0-830c-043caf2979ba" (UID: "d1efa2c3-6982-45b0-830c-043caf2979ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.577464 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data" (OuterVolumeSpecName: "config-data") pod "d1efa2c3-6982-45b0-830c-043caf2979ba" (UID: "d1efa2c3-6982-45b0-830c-043caf2979ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.617377 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.617616 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.617712 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.415631 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.479974 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.501741 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.528923 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: E0127 18:31:15.529759 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.529783 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.530252 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.531811 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.538511 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.538833 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.539103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.551658 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638097 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hth69\" (UniqueName: \"kubernetes.io/projected/3257c75e-f45f-4166-b7ba-66c1990ac2dc-kube-api-access-hth69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638163 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638260 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 
27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638381 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638401 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740571 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: 
I0127 18:31:15.740762 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740817 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hth69\" (UniqueName: \"kubernetes.io/projected/3257c75e-f45f-4166-b7ba-66c1990ac2dc-kube-api-access-hth69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.745937 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.746080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.746265 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.746786 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.758077 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hth69\" (UniqueName: \"kubernetes.io/projected/3257c75e-f45f-4166-b7ba-66c1990ac2dc-kube-api-access-hth69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.782931 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" path="/var/lib/kubelet/pods/d1efa2c3-6982-45b0-830c-043caf2979ba/volumes" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.856135 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.313079 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.430927 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3257c75e-f45f-4166-b7ba-66c1990ac2dc","Type":"ContainerStarted","Data":"1f6e88115e10ab374ec003fe7379ef64dabf861060c19f5c5094b89c8f7e99d3"} Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.640227 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.640758 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.643032 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.646393 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: E0127 18:31:16.757424 4907 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/59d80fe13631613822e66e4572059c7d9ff67833c45668bac89afec0e92d4169/diff" to get inode usage: stat /var/lib/containers/storage/overlay/59d80fe13631613822e66e4572059c7d9ff67833c45668bac89afec0e92d4169/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-7d978555f9-dwq2p_719784a4-cead-4054-ac6b-e7e45118be8c/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-7d978555f9-dwq2p_719784a4-cead-4054-ac6b-e7e45118be8c/dnsmasq-dns/0.log: no such file or directory Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.442093 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3257c75e-f45f-4166-b7ba-66c1990ac2dc","Type":"ContainerStarted","Data":"65ed21d8e8e267517f529987aee51b72132330ab418de56f787691031d60a43f"} Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.442628 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.454494 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.470996 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.470978829 podStartE2EDuration="2.470978829s" podCreationTimestamp="2026-01-27 18:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:17.461331447 +0000 UTC m=+1532.590614059" watchObservedRunningTime="2026-01-27 18:31:17.470978829 +0000 UTC m=+1532.600261441" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.650661 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.653064 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.685734 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794344 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794510 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794695 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794740 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794847 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896637 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896867 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896952 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896986 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897026 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897678 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897857 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.898461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.899016 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.916200 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.983350 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:18 crc kubenswrapper[4907]: I0127 18:31:18.577826 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:31:19 crc kubenswrapper[4907]: I0127 18:31:19.466203 4907 generic.go:334] "Generic (PLEG): container finished" podID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerID="f064d23f4fac689b5a994e7c87e0b8620a4d34af790c875556eda8d9fe99678c" exitCode=0 Jan 27 18:31:19 crc kubenswrapper[4907]: I0127 18:31:19.466302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerDied","Data":"f064d23f4fac689b5a994e7c87e0b8620a4d34af790c875556eda8d9fe99678c"} Jan 27 18:31:19 crc kubenswrapper[4907]: I0127 18:31:19.466656 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerStarted","Data":"4bd8eb5f48ea3f38d33f0dd542b84168a28c90547f3c08b18a3dbbf20455e507"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.198394 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.199256 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" containerID="cri-o://782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.199329 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" containerID="cri-o://7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" gracePeriod=30 Jan 27 18:31:20 crc 
kubenswrapper[4907]: I0127 18:31:20.199371 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" containerID="cri-o://bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.199428 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" containerID="cri-o://171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.209017 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.252:3000/\": EOF" Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.383449 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.482854 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" exitCode=0 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.482903 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" exitCode=2 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.483005 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.483061 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.487337 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" containerID="cri-o://e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.487498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerStarted","Data":"5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.488097 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" containerID="cri-o://0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.488471 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.510705 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" podStartSLOduration=3.510687437 podStartE2EDuration="3.510687437s" podCreationTimestamp="2026-01-27 18:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:20.510426919 +0000 UTC m=+1535.639709551" watchObservedRunningTime="2026-01-27 18:31:20.510687437 +0000 UTC m=+1535.639970049" Jan 27 18:31:20 crc 
kubenswrapper[4907]: I0127 18:31:20.856634 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.391621 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493207 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493257 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493306 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493408 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493461 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod 
\"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493619 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493668 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493793 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.494542 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.494626 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.499340 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts" (OuterVolumeSpecName: "scripts") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.503791 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb" (OuterVolumeSpecName: "kube-api-access-wvdqb") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "kube-api-access-wvdqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510515 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" exitCode=0 Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510590 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" exitCode=0 Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510585 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510621 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"eab5edc13640e8653e78a3f8680981b7da42ad91be33abedd1cee2900fc2aa2c"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510688 4907 scope.go:117] "RemoveContainer" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.514006 4907 generic.go:334] "Generic (PLEG): container finished" podID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" 
containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" exitCode=143 Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.514711 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerDied","Data":"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.551497 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.597273 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.597311 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.597324 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.640739 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.678973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data" (OuterVolumeSpecName: "config-data") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.699892 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.699972 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.798217 4907 scope.go:117] "RemoveContainer" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.835333 4907 scope.go:117] "RemoveContainer" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.855341 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.867160 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.868373 4907 scope.go:117] "RemoveContainer" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.893781 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894406 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894478 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894535 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894602 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894667 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894740 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894835 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894888 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895205 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895291 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895353 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895406 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.897516 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.900897 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.901175 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.919314 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.919860 4907 scope.go:117] "RemoveContainer" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.920816 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": container with ID starting with 171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7 not found: ID does not exist" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.920856 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} err="failed to get container status \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": rpc error: code = NotFound desc = could not find container \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": container with ID starting with 171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.920882 4907 scope.go:117] "RemoveContainer" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.921368 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": container with ID starting with 7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a not found: ID does not exist" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.921396 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} err="failed to get container status \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": rpc error: code = NotFound desc = could not find container \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": container with ID starting with 7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.921412 4907 scope.go:117] "RemoveContainer" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.925675 4907 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": container with ID starting with bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2 not found: ID does not exist" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.925714 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} err="failed to get container status \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": rpc error: code = NotFound desc = could not find container \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": container with ID starting with bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.925739 4907 scope.go:117] "RemoveContainer" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.926293 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": container with ID starting with 782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f not found: ID does not exist" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926419 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} err="failed to get container status \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": rpc error: code = NotFound desc = could not find container 
\"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": container with ID starting with 782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926462 4907 scope.go:117] "RemoveContainer" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926795 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} err="failed to get container status \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": rpc error: code = NotFound desc = could not find container \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": container with ID starting with 171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926822 4907 scope.go:117] "RemoveContainer" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.927800 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} err="failed to get container status \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": rpc error: code = NotFound desc = could not find container \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": container with ID starting with 7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.927826 4907 scope.go:117] "RemoveContainer" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.928785 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} err="failed to get container status \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": rpc error: code = NotFound desc = could not find container \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": container with ID starting with bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.928809 4907 scope.go:117] "RemoveContainer" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.929971 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} err="failed to get container status \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": rpc error: code = NotFound desc = could not find container \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": container with ID starting with 782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f not found: ID does not exist" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007042 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007105 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") 
" pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007221 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007288 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007412 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109296 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109386 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109450 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109481 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " 
pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109643 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.110135 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.110302 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.113229 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.113461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.113949 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod 
\"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.114646 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.127509 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.230218 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.421218 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.724722 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:23 crc kubenswrapper[4907]: I0127 18:31:23.545941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6"} Jan 27 18:31:23 crc kubenswrapper[4907]: I0127 18:31:23.546227 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"920949198b04d9f03e2ee16b588c61df0a8878a810066103d538193d6f3eeab6"} Jan 27 18:31:23 crc kubenswrapper[4907]: I0127 18:31:23.780920 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" path="/var/lib/kubelet/pods/5783d733-2312-484a-8fbe-7ea19d454c1a/volumes" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.193194 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.272628 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273079 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273105 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs" (OuterVolumeSpecName: "logs") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273174 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.274115 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.279936 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f" (OuterVolumeSpecName: "kube-api-access-6zs8f") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "kube-api-access-6zs8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.305977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.322726 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data" (OuterVolumeSpecName: "config-data") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.376592 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.376626 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.376637 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.558442 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca"} Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560097 4907 generic.go:334] "Generic (PLEG): container finished" podID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" exitCode=0 Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560147 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerDied","Data":"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c"} Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560167 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560192 4907 scope.go:117] "RemoveContainer" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560179 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerDied","Data":"7c25ec383f00142113ac9300e8dffba737e591fb0b61aea496b9da56aff1e861"} Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.585943 4907 scope.go:117] "RemoveContainer" containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.605700 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.612100 4907 scope.go:117] "RemoveContainer" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.612659 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c\": container with ID starting with 0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c not found: ID does not exist" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.612691 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c"} err="failed to get container status \"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c\": rpc error: code = NotFound desc = could not find container \"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c\": container with ID starting with 
0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c not found: ID does not exist" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.612716 4907 scope.go:117] "RemoveContainer" containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.616049 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df\": container with ID starting with e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df not found: ID does not exist" containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.616099 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df"} err="failed to get container status \"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df\": rpc error: code = NotFound desc = could not find container \"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df\": container with ID starting with e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df not found: ID does not exist" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.623281 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.637779 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.638604 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.638628 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" 
containerName="nova-api-api" Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.638681 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.638688 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.639048 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.639075 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.641018 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.648682 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.648963 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.649178 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.653610 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.786217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 
crc kubenswrapper[4907]: I0127 18:31:24.786614 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.786772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.786849 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.787086 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.787578 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.890660 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891121 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891193 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891232 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891263 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891325 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc 
kubenswrapper[4907]: I0127 18:31:24.895105 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.896646 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.897039 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.897093 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.900714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.918147 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"nova-api-0\" (UID: 
\"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.962373 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.574620 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415"} Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.589169 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.779507 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" path="/var/lib/kubelet/pods/3416074e-eff1-48c8-af01-f9dbc6c77a0e/volumes" Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.857831 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.884056 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.204386 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.206274 4907 watcher.go:93] Error while processing 
event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.208097 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.208158 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.213905 4907 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.213960 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.214034 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5783d733_2312_484a_8fbe_7ea19d454c1a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5783d733_2312_484a_8fbe_7ea19d454c1a.slice: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.218793 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3416074e_eff1_48c8_af01_f9dbc6c77a0e.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3416074e_eff1_48c8_af01_f9dbc6c77a0e.slice: no such file or directory Jan 27 18:31:26 crc 
kubenswrapper[4907]: E0127 18:31:26.485672 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice/crio-conmon-1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice/crio-conmon-a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice/crio-a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-conmon-fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice/crio-339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice/crio-1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice/crio-conmon-1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice/crio-53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-e04f632c782ad650c5f21f4659e785c7460b983a1b0277db3ff9956d9ab7061b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice/crio-conmon-f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice/crio-92c93f931d8f51099774c248291d876431d14b5c43aa83d56753a0ac2d31f02a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice/crio-1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice/crio-f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-conmon-953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-conmon-88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice/crio-fb2fc41aa6c79868126426826ea77ab0aae08150f293d25b0312a5646e2300eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.521192 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.521579 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.590785 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerStarted","Data":"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1"} Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.590825 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerStarted","Data":"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee"} Jan 27 
18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.590835 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerStarted","Data":"e4d6a13adca6c0e1bb31637f5bdeeb280fdc3bcb042efa87415ac3e6115add8f"} Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.598883 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5" exitCode=137 Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.599003 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5"} Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.620755 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.624964 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.624948077 podStartE2EDuration="2.624948077s" podCreationTimestamp="2026-01-27 18:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:26.609802849 +0000 UTC m=+1541.739085481" watchObservedRunningTime="2026-01-27 18:31:26.624948077 +0000 UTC m=+1541.754230689" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.627178 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772676 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772724 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772824 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772841 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.787832 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts" (OuterVolumeSpecName: "scripts") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.800077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk" (OuterVolumeSpecName: "kube-api-access-whppk") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "kube-api-access-whppk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.875599 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.876000 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.877535 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"] Jan 27 18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878111 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878133 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" Jan 27 18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878151 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878159 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" Jan 27 
18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878180 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878188 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" Jan 27 18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878249 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878257 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.880982 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.881035 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.881052 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.881089 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.884321 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.900903 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.901084 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.901572 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"] Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.973607 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.980965 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.981111 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.981385 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.981916 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.982114 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.003331 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data" (OuterVolumeSpecName: "config-data") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084300 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084381 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084450 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084691 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.088522 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.089165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.089835 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.101987 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.120095 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.614889 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763"} Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.615263 4907 scope.go:117] "RemoveContainer" containerID="d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.614939 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516"} Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619543 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" containerID="cri-o://8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619691 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" containerID="cri-o://23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619761 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" 
containerID="cri-o://cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619808 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" containerID="cri-o://d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.672530 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.934195821 podStartE2EDuration="6.672508462s" podCreationTimestamp="2026-01-27 18:31:21 +0000 UTC" firstStartedPulling="2026-01-27 18:31:22.71942468 +0000 UTC m=+1537.848707292" lastFinishedPulling="2026-01-27 18:31:26.457737331 +0000 UTC m=+1541.587019933" observedRunningTime="2026-01-27 18:31:27.651868398 +0000 UTC m=+1542.781151030" watchObservedRunningTime="2026-01-27 18:31:27.672508462 +0000 UTC m=+1542.801791074" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.685441 4907 scope.go:117] "RemoveContainer" containerID="5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.708950 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.724802 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: W0127 18:31:27.726061 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52256d78_f327_4af2_9452_0483ad62dea0.slice/crio-af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1 WatchSource:0}: Error finding container af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1: Status 404 returned error can't find the 
container with id af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.739994 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.759271 4907 scope.go:117] "RemoveContainer" containerID="046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.768231 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" path="/var/lib/kubelet/pods/eec3a07c-7c6e-40ed-9a0a-f1952f923616/volumes" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.769786 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.775954 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.783210 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.783401 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.784989 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.785139 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.786056 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.788898 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 
18:31:27.806184 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.806222 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.806361 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.806381 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.807282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.807308 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909337 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909379 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909459 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909485 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909599 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.917022 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.917165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.917341 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.918462 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.924161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.926348 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.987205 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.063761 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.076663 4907 scope.go:117] "RemoveContainer" containerID="ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.081739 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.082037 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-nft4l" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" containerID="cri-o://1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e" gracePeriod=10 Jan 27 18:31:28 crc kubenswrapper[4907]: W0127 18:31:28.659461 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c7b40d_63e2_4fbf_a59d_44c106984d76.slice/crio-5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8 WatchSource:0}: Error finding container 5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8: Status 404 returned error can't find the container with id 5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660831 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516" exitCode=0 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660871 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415" exitCode=2 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660888 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca" exitCode=0 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660978 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.661009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.661021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.661343 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.688174 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerStarted","Data":"9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b"} Jan 27 
18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.688224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerStarted","Data":"af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.702659 4907 generic.go:334] "Generic (PLEG): container finished" podID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerID="1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e" exitCode=0 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.702711 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerDied","Data":"1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.722909 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-q8zd6" podStartSLOduration=2.722890197 podStartE2EDuration="2.722890197s" podCreationTimestamp="2026-01-27 18:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:28.713861442 +0000 UTC m=+1543.843144054" watchObservedRunningTime="2026-01-27 18:31:28.722890197 +0000 UTC m=+1543.852172819" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.932012 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.047963 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048004 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048179 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048285 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.103782 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn" (OuterVolumeSpecName: "kube-api-access-7r7nn") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "kube-api-access-7r7nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.138863 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.157217 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.158454 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.160589 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.160748 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.187046 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config" (OuterVolumeSpecName: "config") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.198699 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.201190 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.263008 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.263046 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.263057 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.716655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa"} Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.717856 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8"} Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.718813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerDied","Data":"d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851"} Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.718968 4907 scope.go:117] "RemoveContainer" containerID="1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.718849 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.951654 4907 scope.go:117] "RemoveContainer" containerID="c47423de91ef3cdc23957a64f0feb2303eae5d5532344bab60096094e88a4b1a" Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.739676 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6" exitCode=0 Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.740042 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6"} Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.743931 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff"} Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.743965 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86"} Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.242265 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338260 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338523 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338579 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338671 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338712 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338776 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.340033 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.340772 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.345412 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq" (OuterVolumeSpecName: "kube-api-access-6h8hq") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "kube-api-access-6h8hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.345738 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts" (OuterVolumeSpecName: "scripts") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.383949 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.439548 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.440400 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: W0127 18:31:31.440588 4907 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0/volumes/kubernetes.io~secret/combined-ca-bundle Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.440620 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.440985 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441007 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441016 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441026 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441035 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441043 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.467050 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data" (OuterVolumeSpecName: "config-data") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.544034 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.761271 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.775414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"920949198b04d9f03e2ee16b588c61df0a8878a810066103d538193d6f3eeab6"} Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.775457 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad"} Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.775487 4907 scope.go:117] "RemoveContainer" containerID="23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.819592 4907 scope.go:117] "RemoveContainer" containerID="cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.831519 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.433306303 podStartE2EDuration="4.831479211s" podCreationTimestamp="2026-01-27 18:31:27 +0000 UTC" firstStartedPulling="2026-01-27 18:31:28.669743325 +0000 UTC m=+1543.799025937" lastFinishedPulling="2026-01-27 18:31:31.067916233 +0000 UTC m=+1546.197198845" observedRunningTime="2026-01-27 18:31:31.792434958 +0000 UTC m=+1546.921717570" 
watchObservedRunningTime="2026-01-27 18:31:31.831479211 +0000 UTC m=+1546.960761833" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.884017 4907 scope.go:117] "RemoveContainer" containerID="d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.888430 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.908473 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.921309 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922086 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922104 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922137 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922146 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922174 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="init" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922182 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="init" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922227 4907 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922235 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922252 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922261 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922281 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922288 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922600 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922633 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922648 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922658 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922677 4907 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.925283 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.929160 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.929960 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.944877 4907 scope.go:117] "RemoveContainer" containerID="8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.947033 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.056687 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod 
\"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057196 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057294 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057448 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159496 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159633 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159718 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159740 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"ceilometer-0\" (UID: 
\"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.160181 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.160437 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.165930 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.166523 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.166951 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.171807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.182681 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.258165 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.780129 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.764998 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" path="/var/lib/kubelet/pods/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0/volumes" Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.804842 4907 generic.go:334] "Generic (PLEG): container finished" podID="52256d78-f327-4af2-9452-0483ad62dea0" containerID="9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b" exitCode=0 Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.804912 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerDied","Data":"9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b"} Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.809590 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"} Jan 27 18:31:33 crc 
kubenswrapper[4907]: I0127 18:31:33.809639 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"f3337d626f9eda7e83ce1fcded55327b44307f257a95be718e85ae9bc6d36459"}
Jan 27 18:31:34 crc kubenswrapper[4907]: I0127 18:31:34.844650 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"}
Jan 27 18:31:34 crc kubenswrapper[4907]: I0127 18:31:34.962644 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:31:34 crc kubenswrapper[4907]: I0127 18:31:34.963072 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.370263 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6"
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439013 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") "
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439081 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") "
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439201 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") "
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439325 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") "
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.456731 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts" (OuterVolumeSpecName: "scripts") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.456939 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq" (OuterVolumeSpecName: "kube-api-access-l2fzq") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "kube-api-access-l2fzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.473437 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.490509 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data" (OuterVolumeSpecName: "config-data") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542320 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542380 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542397 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542408 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.858713 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerDied","Data":"af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1"}
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.858763 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1"
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.858764 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6"
Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.861205 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"}
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.011537 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.012216 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" containerID="cri-o://85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" gracePeriod=30
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.013442 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" containerID="cri-o://2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" gracePeriod=30
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.022151 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.022163 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.035119 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.039851 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" containerID="cri-o://0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" gracePeriod=30
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.065333 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.065584 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" containerID="cri-o://047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177" gracePeriod=30
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.066098 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" containerID="cri-o://7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af" gracePeriod=30
Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.869092 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.872625 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.874119 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.874290 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler"
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.879111 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"}
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.879679 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.882880 4907 generic.go:334] "Generic (PLEG): container finished" podID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177" exitCode=143
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.882953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerDied","Data":"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"}
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.885911 4907 generic.go:334] "Generic (PLEG): container finished" podID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" exitCode=143
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.885952 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerDied","Data":"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee"}
Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.911151 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.387380414 podStartE2EDuration="5.911126581s" podCreationTimestamp="2026-01-27 18:31:31 +0000 UTC" firstStartedPulling="2026-01-27 18:31:32.783163408 +0000 UTC m=+1547.912446020" lastFinishedPulling="2026-01-27 18:31:36.306909575 +0000 UTC m=+1551.436192187" observedRunningTime="2026-01-27 18:31:36.906691376 +0000 UTC m=+1552.035973998" watchObservedRunningTime="2026-01-27 18:31:36.911126581 +0000 UTC m=+1552.040409193"
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.226466 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": read tcp 10.217.0.2:44042->10.217.0.251:8775: read: connection reset by peer"
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.226489 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": read tcp 10.217.0.2:44052->10.217.0.251:8775: read: connection reset by peer"
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.776380 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.855671 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") "
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.855774 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") "
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.855960 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") "
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.856056 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") "
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.856112 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") "
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.858928 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs" (OuterVolumeSpecName: "logs") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.867893 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm" (OuterVolumeSpecName: "kube-api-access-959bm") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "kube-api-access-959bm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.892913 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.930200 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data" (OuterVolumeSpecName: "config-data") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932661 4907 generic.go:334] "Generic (PLEG): container finished" podID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af" exitCode=0
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932710 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerDied","Data":"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"}
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932734 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerDied","Data":"d180305b0a777de2ed578a449a3d5ecb18240c9610de3f6b48a1a28bd4f905ad"}
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932750 4907 scope.go:117] "RemoveContainer" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932874 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.941084 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959207 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959247 4907 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959261 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959272 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959282 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.074898 4907 scope.go:117] "RemoveContainer" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.186014 4907 scope.go:117] "RemoveContainer" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"
Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.192709 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af\": container with ID starting with 7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af not found: ID does not exist" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.192757 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"} err="failed to get container status \"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af\": rpc error: code = NotFound desc = could not find container \"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af\": container with ID starting with 7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af not found: ID does not exist"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.192789 4907 scope.go:117] "RemoveContainer" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"
Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.193361 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177\": container with ID starting with 047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177 not found: ID does not exist" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.193396 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"} err="failed to get container status \"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177\": rpc error: code = NotFound desc = could not find container \"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177\": container with ID starting with 047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177 not found: ID does not exist"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.274954 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.290009 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311110 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.311691 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311714 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log"
Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.311741 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311750 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata"
Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.311781 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52256d78-f327-4af2-9452-0483ad62dea0" containerName="nova-manage"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311788 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="52256d78-f327-4af2-9452-0483ad62dea0" containerName="nova-manage"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.312031 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.312054 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="52256d78-f327-4af2-9452-0483ad62dea0" containerName="nova-manage"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.312088 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.313729 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.324204 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.324399 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.341501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386446 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-config-data\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386574 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386737 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv78v\" (UniqueName: \"kubernetes.io/projected/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-kube-api-access-fv78v\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386938 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-logs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.387052 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.489841 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-config-data\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.489899 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.489947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv78v\" (UniqueName: \"kubernetes.io/projected/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-kube-api-access-fv78v\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.490012 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-logs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.490073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.490798 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-logs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.508570 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.511807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.517428 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv78v\" (UniqueName: \"kubernetes.io/projected/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-kube-api-access-fv78v\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.517805 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-config-data\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0"
Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.678449 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.195159 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:31:41 crc kubenswrapper[4907]: W0127 18:31:41.199811 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e3cb7a2_d4f9_43bf_a1e5_6486f796f9a7.slice/crio-93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541 WatchSource:0}: Error finding container 93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541: Status 404 returned error can't find the container with id 93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.670623 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.726281 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") "
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.726421 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") "
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.726465 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") "
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.739115 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz" (OuterVolumeSpecName: "kube-api-access-fv2nz") pod "aa954535-3c88-43d5-ba61-2cc0c9c6690f" (UID: "aa954535-3c88-43d5-ba61-2cc0c9c6690f"). InnerVolumeSpecName "kube-api-access-fv2nz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.767516 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" path="/var/lib/kubelet/pods/d5aaba60-3b03-4a67-8862-7def0fe6f9d9/volumes"
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.794167 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data" (OuterVolumeSpecName: "config-data") pod "aa954535-3c88-43d5-ba61-2cc0c9c6690f" (UID: "aa954535-3c88-43d5-ba61-2cc0c9c6690f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.800638 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa954535-3c88-43d5-ba61-2cc0c9c6690f" (UID: "aa954535-3c88-43d5-ba61-2cc0c9c6690f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.829298 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.829337 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.829347 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") on node \"crc\" DevicePath \"\""
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.948348 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964350 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" exitCode=0
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964467 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerDied","Data":"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708"}
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964530 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerDied","Data":"5489fcea5b6ae89e6b35c046631a54afc66829b16e074f7f6498d1c0a256c442"}
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964542 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964569 4907 scope.go:117] "RemoveContainer" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708"
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.969009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7","Type":"ContainerStarted","Data":"1a4efe9c57671bc8a25a10a2ae69e4708cda26001d8d84203db8d766f1922bd5"}
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.969053 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7","Type":"ContainerStarted","Data":"69957ade714649bbef7c1477912eeef67da9190958c15fef866a608cd0ea10c2"}
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.969063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7","Type":"ContainerStarted","Data":"93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541"}
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978089 4907 generic.go:334] "Generic (PLEG): container finished" podID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" exitCode=0
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978152 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerDied","Data":"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1"}
Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978189 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0"
event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerDied","Data":"e4d6a13adca6c0e1bb31637f5bdeeb280fdc3bcb042efa87415ac3e6115add8f"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978240 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.999793 4907 scope.go:117] "RemoveContainer" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.000306 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708\": container with ID starting with 0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708 not found: ID does not exist" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.000346 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708"} err="failed to get container status \"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708\": rpc error: code = NotFound desc = could not find container \"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708\": container with ID starting with 0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708 not found: ID does not exist" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.000374 4907 scope.go:117] "RemoveContainer" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.022765 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.022747094 podStartE2EDuration="2.022747094s" 
podCreationTimestamp="2026-01-27 18:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:42.017202047 +0000 UTC m=+1557.146484659" watchObservedRunningTime="2026-01-27 18:31:42.022747094 +0000 UTC m=+1557.152029706" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033573 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033646 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033787 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033929 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod 
\"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033992 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.034267 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs" (OuterVolumeSpecName: "logs") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.038380 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.047743 4907 scope.go:117] "RemoveContainer" containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.051836 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g" (OuterVolumeSpecName: "kube-api-access-jqn6g") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "kube-api-access-jqn6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.083736 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.109320 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data" (OuterVolumeSpecName: "config-data") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.109624 4907 scope.go:117] "RemoveContainer" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.110990 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1\": container with ID starting with 85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1 not found: ID does not exist" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.111034 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1"} err="failed to get container status \"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1\": rpc error: code = NotFound desc = could not find container \"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1\": container with ID starting with 85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1 not found: ID does not exist" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.111062 4907 scope.go:117] "RemoveContainer" 
containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.111469 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee\": container with ID starting with 2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee not found: ID does not exist" containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.111492 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee"} err="failed to get container status \"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee\": rpc error: code = NotFound desc = could not find container \"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee\": container with ID starting with 2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee not found: ID does not exist" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.113944 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.123901 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.141539 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.141812 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.141916 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.158414 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.159289 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159316 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.159354 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159362 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" Jan 27 18:31:42 crc 
kubenswrapper[4907]: E0127 18:31:42.159389 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159396 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159762 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159780 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159801 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.161289 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.163725 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.173501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.175836 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.177969 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.244106 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-config-data\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.244423 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7tz7\" (UniqueName: \"kubernetes.io/projected/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-kube-api-access-c7tz7\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.244650 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.245287 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.245305 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.314135 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.329292 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.339629 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.341814 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.348037 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-config-data\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.348226 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7tz7\" (UniqueName: \"kubernetes.io/projected/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-kube-api-access-c7tz7\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.348315 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.351911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.352649 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-config-data\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.355177 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.355193 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.355327 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.357085 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.375089 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7tz7\" (UniqueName: \"kubernetes.io/projected/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-kube-api-access-c7tz7\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.450647 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.450699 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgtj\" (UniqueName: \"kubernetes.io/projected/aafbf219-964f-4436-964e-7ad85e0eb56b-kube-api-access-4lgtj\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451377 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-config-data\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451688 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafbf219-964f-4436-964e-7ad85e0eb56b-logs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451853 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451992 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.493419 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.554809 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-config-data\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555218 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafbf219-964f-4436-964e-7ad85e0eb56b-logs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555488 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-public-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555652 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 
18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555675 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgtj\" (UniqueName: \"kubernetes.io/projected/aafbf219-964f-4436-964e-7ad85e0eb56b-kube-api-access-4lgtj\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.556213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafbf219-964f-4436-964e-7ad85e0eb56b-logs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.559316 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-config-data\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.560594 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.562270 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-public-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.566519 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.574540 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgtj\" (UniqueName: \"kubernetes.io/projected/aafbf219-964f-4436-964e-7ad85e0eb56b-kube-api-access-4lgtj\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.746889 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.039145 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.245198 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:43 crc kubenswrapper[4907]: W0127 18:31:43.246112 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaafbf219_964f_4436_964e_7ad85e0eb56b.slice/crio-87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf WatchSource:0}: Error finding container 87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf: Status 404 returned error can't find the container with id 87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.768498 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" path="/var/lib/kubelet/pods/55c85ba4-73c9-4126-8e07-9795c0cac323/volumes" Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.771226 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" path="/var/lib/kubelet/pods/aa954535-3c88-43d5-ba61-2cc0c9c6690f/volumes" Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.012262 
4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafbf219-964f-4436-964e-7ad85e0eb56b","Type":"ContainerStarted","Data":"0fef0bc5290e7ec8b0c52cf6f1b31c87680e6f9bfa2ea0549a9e48e13d3a07ab"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.012776 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafbf219-964f-4436-964e-7ad85e0eb56b","Type":"ContainerStarted","Data":"0c3f8522c97a29de9331926b2f53fe83a52df5413bbf670ee81fa26653dca301"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.012795 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafbf219-964f-4436-964e-7ad85e0eb56b","Type":"ContainerStarted","Data":"87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.016233 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a","Type":"ContainerStarted","Data":"64bf0ca613eef128a86b6db3405f955e3d83c371fae524e00fbf47188789f929"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.016283 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a","Type":"ContainerStarted","Data":"01901d4f80469b3bf5d5105ef6a7ab23158667a4011526df2930978c249e7c05"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.060511 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.060460113 podStartE2EDuration="2.060460113s" podCreationTimestamp="2026-01-27 18:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:44.033613435 +0000 UTC m=+1559.162896057" watchObservedRunningTime="2026-01-27 18:31:44.060460113 +0000 UTC m=+1559.189742725" Jan 27 
18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.065999 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.06597856 podStartE2EDuration="2.06597856s" podCreationTimestamp="2026-01-27 18:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:44.051609493 +0000 UTC m=+1559.180892105" watchObservedRunningTime="2026-01-27 18:31:44.06597856 +0000 UTC m=+1559.195261172" Jan 27 18:31:45 crc kubenswrapper[4907]: I0127 18:31:45.679425 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:31:45 crc kubenswrapper[4907]: I0127 18:31:45.679472 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:31:47 crc kubenswrapper[4907]: I0127 18:31:47.495062 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 18:31:50 crc kubenswrapper[4907]: I0127 18:31:50.679574 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:50 crc kubenswrapper[4907]: I0127 18:31:50.680200 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:51 crc kubenswrapper[4907]: I0127 18:31:51.694705 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:51 crc kubenswrapper[4907]: I0127 18:31:51.694817 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.495415 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.542462 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.747937 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.747987 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:53 crc kubenswrapper[4907]: I0127 18:31:53.161548 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 18:31:53 crc kubenswrapper[4907]: I0127 18:31:53.762773 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafbf219-964f-4436-964e-7ad85e0eb56b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:53 crc kubenswrapper[4907]: I0127 18:31:53.762831 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafbf219-964f-4436-964e-7ad85e0eb56b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.523771 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.524113 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.524170 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.525105 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.525155 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" gracePeriod=600 Jan 27 18:31:56 crc kubenswrapper[4907]: E0127 18:31:56.646182 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:31:57 
crc kubenswrapper[4907]: I0127 18:31:57.174075 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" exitCode=0 Jan 27 18:31:57 crc kubenswrapper[4907]: I0127 18:31:57.174132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"} Jan 27 18:31:57 crc kubenswrapper[4907]: I0127 18:31:57.174176 4907 scope.go:117] "RemoveContainer" containerID="09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31" Jan 27 18:31:57 crc kubenswrapper[4907]: I0127 18:31:57.175062 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:31:57 crc kubenswrapper[4907]: E0127 18:31:57.175422 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:31:59 crc kubenswrapper[4907]: I0127 18:31:59.920459 4907 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8dbf3816_36b8_40ed_8dc6_3faf4b571dd6.slice" Jan 27 18:31:59 crc kubenswrapper[4907]: E0127 18:31:59.920866 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
delete cgroup paths for [kubepods besteffort pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8dbf3816_36b8_40ed_8dc6_3faf4b571dd6.slice" pod="openstack/dnsmasq-dns-7877d89589-nft4l" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.208449 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.254214 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.270847 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.686047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.689384 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.696652 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:32:01 crc kubenswrapper[4907]: I0127 18:32:01.225674 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:32:01 crc kubenswrapper[4907]: I0127 18:32:01.761743 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" path="/var/lib/kubelet/pods/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6/volumes" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.270394 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.756070 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.756128 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.756936 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.757460 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.765297 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.765342 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.315224 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.317679 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" containerID="cri-o://b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" gracePeriod=30 Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.434994 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.435235 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" 
containerID="cri-o://42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" gracePeriod=30 Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.062385 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.069818 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173582 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"a0791214-d591-446c-a64a-e1e0f237392e\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173674 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"a0791214-d591-446c-a64a-e1e0f237392e\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173799 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"f00f131e-56a8-4fae-a498-798713d2159f\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"a0791214-d591-446c-a64a-e1e0f237392e\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.180474 4907 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr" (OuterVolumeSpecName: "kube-api-access-vczdr") pod "f00f131e-56a8-4fae-a498-798713d2159f" (UID: "f00f131e-56a8-4fae-a498-798713d2159f"). InnerVolumeSpecName "kube-api-access-vczdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.183057 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc" (OuterVolumeSpecName: "kube-api-access-98fhc") pod "a0791214-d591-446c-a64a-e1e0f237392e" (UID: "a0791214-d591-446c-a64a-e1e0f237392e"). InnerVolumeSpecName "kube-api-access-98fhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.208493 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0791214-d591-446c-a64a-e1e0f237392e" (UID: "a0791214-d591-446c-a64a-e1e0f237392e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.240982 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data" (OuterVolumeSpecName: "config-data") pod "a0791214-d591-446c-a64a-e1e0f237392e" (UID: "a0791214-d591-446c-a64a-e1e0f237392e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277324 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277357 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277369 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277377 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318557 4907 generic.go:334] "Generic (PLEG): container finished" podID="a0791214-d591-446c-a64a-e1e0f237392e" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" exitCode=2 Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318632 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318671 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerDied","Data":"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318727 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerDied","Data":"77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318758 4907 scope.go:117] "RemoveContainer" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321333 4907 generic.go:334] "Generic (PLEG): container finished" podID="f00f131e-56a8-4fae-a498-798713d2159f" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" exitCode=2 Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321364 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerDied","Data":"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321384 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerDied","Data":"3673b3443d4ba1d7f90e11d19590b6b725d3fd74d821289ba3dea4614690e212"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321448 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.367702 4907 scope.go:117] "RemoveContainer" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.368252 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129\": container with ID starting with 42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129 not found: ID does not exist" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.368276 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129"} err="failed to get container status \"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129\": rpc error: code = NotFound desc = could not find container \"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129\": container with ID starting with 42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129 not found: ID does not exist" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.368304 4907 scope.go:117] "RemoveContainer" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.384967 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.421799 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.439874 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 
18:32:08.452511 4907 scope.go:117] "RemoveContainer" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.453070 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13\": container with ID starting with b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13 not found: ID does not exist" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.453108 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13"} err="failed to get container status \"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13\": rpc error: code = NotFound desc = could not find container \"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13\": container with ID starting with b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13 not found: ID does not exist" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.456676 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.457312 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.457341 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.457371 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 
18:32:08.457380 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.457673 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.457710 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.458693 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.462080 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.465730 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.478647 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.499084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.536470 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.540927 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.544083 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.544408 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.565062 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.584978 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.585348 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.585678 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ffht\" (UniqueName: \"kubernetes.io/projected/611de5af-e33a-4aca-88c7-201f7c0e6cf9-kube-api-access-8ffht\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.585844 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-config-data\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.687849 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.687927 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688030 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688069 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ffht\" (UniqueName: \"kubernetes.io/projected/611de5af-e33a-4aca-88c7-201f7c0e6cf9-kube-api-access-8ffht\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688093 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688130 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688169 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-config-data\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv82k\" (UniqueName: \"kubernetes.io/projected/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-api-access-fv82k\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.693117 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.693793 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.696344 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-config-data\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.710226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ffht\" (UniqueName: \"kubernetes.io/projected/611de5af-e33a-4aca-88c7-201f7c0e6cf9-kube-api-access-8ffht\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.748195 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.748608 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790754 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv82k\" (UniqueName: \"kubernetes.io/projected/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-api-access-fv82k\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.797015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.801572 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.805147 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.815441 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv82k\" (UniqueName: \"kubernetes.io/projected/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-api-access-fv82k\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.818235 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.877876 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.382194 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.398462 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576020 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576277 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent" containerID="cri-o://e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb" gracePeriod=30
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576319 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd" containerID="cri-o://1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85" gracePeriod=30
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576387 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core" containerID="cri-o://d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213" gracePeriod=30
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576427 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent" containerID="cri-o://e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2" gracePeriod=30
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.762663 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0791214-d591-446c-a64a-e1e0f237392e" path="/var/lib/kubelet/pods/a0791214-d591-446c-a64a-e1e0f237392e/volumes"
Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.763399 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00f131e-56a8-4fae-a498-798713d2159f" path="/var/lib/kubelet/pods/f00f131e-56a8-4fae-a498-798713d2159f/volumes"
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.355608 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"611de5af-e33a-4aca-88c7-201f7c0e6cf9","Type":"ContainerStarted","Data":"91a848f3efb02c072ae558b808c261391451a03401d7d4ed16a718774fe79122"}
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367766 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85" exitCode=0
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367797 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213" exitCode=2
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367806 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb" exitCode=0
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367860 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"}
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"}
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367952 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"}
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.376855 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbdf1e9-d0d7-458d-8f5a-891ee37d7483","Type":"ContainerStarted","Data":"87483b1d68dc87cf54e16981be8688b158dadb7daaef41a7fa9ac3faca7758a1"}
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.376896 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbdf1e9-d0d7-458d-8f5a-891ee37d7483","Type":"ContainerStarted","Data":"5cc4b3eb790d0aac57b8db1df10e3659f7748f967443434090bec45b5fad5faf"}
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.377056 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.406920 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.015570129 podStartE2EDuration="2.406899419s" podCreationTimestamp="2026-01-27 18:32:08 +0000 UTC" firstStartedPulling="2026-01-27 18:32:09.393598521 +0000 UTC m=+1584.522881123" lastFinishedPulling="2026-01-27 18:32:09.784927801 +0000 UTC m=+1584.914210413" observedRunningTime="2026-01-27 18:32:10.392935174 +0000 UTC m=+1585.522217806" watchObservedRunningTime="2026-01-27 18:32:10.406899419 +0000 UTC m=+1585.536182031"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.326658 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.391774 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"611de5af-e33a-4aca-88c7-201f7c0e6cf9","Type":"ContainerStarted","Data":"7815e0a6478469eafb9c980413aea2a77a2cb76f6a019b1bac21d93d286bc8ba"}
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405646 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2" exitCode=0
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405692 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"}
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405744 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"f3337d626f9eda7e83ce1fcded55327b44307f257a95be718e85ae9bc6d36459"}
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405770 4907 scope.go:117] "RemoveContainer" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405789 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.426435 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.618464967 podStartE2EDuration="3.426413141s" podCreationTimestamp="2026-01-27 18:32:08 +0000 UTC" firstStartedPulling="2026-01-27 18:32:09.395440733 +0000 UTC m=+1584.524723345" lastFinishedPulling="2026-01-27 18:32:10.203388907 +0000 UTC m=+1585.332671519" observedRunningTime="2026-01-27 18:32:11.419541687 +0000 UTC m=+1586.548824299" watchObservedRunningTime="2026-01-27 18:32:11.426413141 +0000 UTC m=+1586.555695753"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.446912 4907 scope.go:117] "RemoveContainer" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457336 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") "
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457407 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") "
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457490 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") "
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457532 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") "
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457654 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") "
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") "
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458137 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") "
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458149 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458356 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.459002 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.459222 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.466764 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5" (OuterVolumeSpecName: "kube-api-access-mkkc5") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "kube-api-access-mkkc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.468765 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts" (OuterVolumeSpecName: "scripts") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.484939 4907 scope.go:117] "RemoveContainer" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.502632 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.565874 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") on node \"crc\" DevicePath \"\""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.566110 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.566219 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.634491 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data" (OuterVolumeSpecName: "config-data") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.644962 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.667951 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.667988 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.721488 4907 scope.go:117] "RemoveContainer" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.751905 4907 scope.go:117] "RemoveContainer" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.763295 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85\": container with ID starting with 1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85 not found: ID does not exist" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.763364 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"} err="failed to get container status \"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85\": rpc error: code = NotFound desc = could not find container \"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85\": container with ID starting with 1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85 not found: ID does not exist"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.763402 4907 scope.go:117] "RemoveContainer" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.764044 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213\": container with ID starting with d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213 not found: ID does not exist" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764076 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"} err="failed to get container status \"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213\": rpc error: code = NotFound desc = could not find container \"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213\": container with ID starting with d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213 not found: ID does not exist"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764097 4907 scope.go:117] "RemoveContainer" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.764640 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2\": container with ID starting with e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2 not found: ID does not exist" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764682 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"} err="failed to get container status \"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2\": rpc error: code = NotFound desc = could not find container \"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2\": container with ID starting with e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2 not found: ID does not exist"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764703 4907 scope.go:117] "RemoveContainer" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.765030 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb\": container with ID starting with e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb not found: ID does not exist" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.765094 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"} err="failed to get container status \"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb\": rpc error: code = NotFound desc = could not find container \"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb\": container with ID starting with e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb not found: ID does not exist"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.770618 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.790778 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.809439 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810092 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810117 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent"
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810151 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810161 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent"
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810182 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810189 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd"
Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810200 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810208 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810465 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810494 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810514 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810537 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.813118 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.818108 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.818872 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.819125 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.826133 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.877652 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.877735 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.877768 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878115 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878392 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.979815 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.979936 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.979979 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980066 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980087 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980123 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.982124 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.982292 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.983711 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.983966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.984013 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.984281 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.991399 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.998948 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0"
Jan 27 18:32:12 crc kubenswrapper[4907]: I0127 18:32:12.139409 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:12 crc kubenswrapper[4907]: I0127 18:32:12.613758 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:13 crc kubenswrapper[4907]: I0127 18:32:13.443929 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"4257897db24798af95d4509907c4c709e003d71b1674cb5e114aba590d0cac1f"} Jan 27 18:32:13 crc kubenswrapper[4907]: I0127 18:32:13.774291 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" path="/var/lib/kubelet/pods/68580f4b-e3d6-44f3-bff6-55be77887563/volumes" Jan 27 18:32:14 crc kubenswrapper[4907]: I0127 18:32:14.456743 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d"} Jan 27 18:32:15 crc kubenswrapper[4907]: I0127 18:32:15.512527 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99"} Jan 27 18:32:15 crc kubenswrapper[4907]: I0127 18:32:15.512932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267"} Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.159670 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.174677 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:32:17 crc kubenswrapper[4907]: 
I0127 18:32:17.244414 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.246254 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.280937 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.335240 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.335353 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.335426 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.437584 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 
18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.437669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.437735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.447649 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.452806 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.455838 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.560302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc"} Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.560485 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.571096 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.588463 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.950741704 podStartE2EDuration="6.588439081s" podCreationTimestamp="2026-01-27 18:32:11 +0000 UTC" firstStartedPulling="2026-01-27 18:32:12.618378269 +0000 UTC m=+1587.747660881" lastFinishedPulling="2026-01-27 18:32:16.256075646 +0000 UTC m=+1591.385358258" observedRunningTime="2026-01-27 18:32:17.581595388 +0000 UTC m=+1592.710878010" watchObservedRunningTime="2026-01-27 18:32:17.588439081 +0000 UTC m=+1592.717721693" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.787398 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" path="/var/lib/kubelet/pods/90ffb508-65d2-4c20-95db-209a1c9a3399/volumes" Jan 27 18:32:18 crc kubenswrapper[4907]: W0127 18:32:18.168227 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67fd41b_79b0_4ab4_86b6_816389597620.slice/crio-05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b WatchSource:0}: Error finding container 05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b: Status 404 returned error can't find the container with id 05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b Jan 27 18:32:18 crc kubenswrapper[4907]: I0127 18:32:18.170997 4907 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:32:18 crc kubenswrapper[4907]: I0127 18:32:18.577498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerStarted","Data":"05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b"} Jan 27 18:32:18 crc kubenswrapper[4907]: I0127 18:32:18.897516 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.582327 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.754489 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:32:19 crc kubenswrapper[4907]: E0127 18:32:19.754901 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.837621 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.837864 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" containerID="cri-o://0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d" gracePeriod=30 Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.838394 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" containerID="cri-o://7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc" gracePeriod=30 Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.838452 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" containerID="cri-o://b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99" gracePeriod=30 Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.838487 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" containerID="cri-o://57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267" gracePeriod=30 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.652859 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc" exitCode=0 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.653162 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99" exitCode=2 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.653175 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267" exitCode=0 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.653183 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d" exitCode=0 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654396 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654464 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654476 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654496 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.834139 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.851382 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951545 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951690 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951721 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951758 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951808 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc 
kubenswrapper[4907]: I0127 18:32:20.951855 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951972 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.952050 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.954376 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.954617 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.976502 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts" (OuterVolumeSpecName: "scripts") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.982182 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm" (OuterVolumeSpecName: "kube-api-access-m6jlm") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "kube-api-access-m6jlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.014396 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055005 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055034 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055045 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055054 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055062 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.085327 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.158483 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.164397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.193081 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data" (OuterVolumeSpecName: "config-data") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.261315 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.261360 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.684329 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"4257897db24798af95d4509907c4c709e003d71b1674cb5e114aba590d0cac1f"} Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.684395 4907 scope.go:117] "RemoveContainer" containerID="7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.684398 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.786775 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.792112 4907 scope.go:117] "RemoveContainer" containerID="b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.800705 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833285 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.833911 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833934 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.833950 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833961 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.833983 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833991 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.834013 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834020 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834223 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834241 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834260 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834279 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.836437 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.844356 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.846935 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.847910 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.848208 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.859250 4907 scope.go:117] "RemoveContainer" containerID="57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879071 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v86x\" (UniqueName: \"kubernetes.io/projected/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-kube-api-access-8v86x\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879147 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879225 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " 
pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879253 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-config-data\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879315 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-scripts\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879412 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879580 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.893507 4907 scope.go:117] "RemoveContainer" 
containerID="0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981661 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-config-data\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-scripts\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981833 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981980 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8v86x\" (UniqueName: \"kubernetes.io/projected/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-kube-api-access-8v86x\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982174 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982246 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982538 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982713 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.990793 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc 
kubenswrapper[4907]: I0127 18:32:21.994363 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.995164 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.999319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-config-data\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.999376 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-scripts\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:22 crc kubenswrapper[4907]: I0127 18:32:22.001260 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v86x\" (UniqueName: \"kubernetes.io/projected/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-kube-api-access-8v86x\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:22 crc kubenswrapper[4907]: I0127 18:32:22.187192 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:22 crc kubenswrapper[4907]: I0127 18:32:22.778515 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:22 crc kubenswrapper[4907]: W0127 18:32:22.787324 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cc0b779_ca13_49be_91c1_ea2eb4a99d9c.slice/crio-bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe WatchSource:0}: Error finding container bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe: Status 404 returned error can't find the container with id bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe Jan 27 18:32:23 crc kubenswrapper[4907]: I0127 18:32:23.790723 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" path="/var/lib/kubelet/pods/51f8c374-8d1f-4229-a1de-d25e2bceffb8/volumes" Jan 27 18:32:23 crc kubenswrapper[4907]: I0127 18:32:23.792105 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe"} Jan 27 18:32:25 crc kubenswrapper[4907]: I0127 18:32:25.070328 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" containerID="cri-o://435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406" gracePeriod=604795 Jan 27 18:32:25 crc kubenswrapper[4907]: I0127 18:32:25.934070 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" containerID="cri-o://dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8" gracePeriod=604795 Jan 
27 18:32:29 crc kubenswrapper[4907]: I0127 18:32:29.640859 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 18:32:30 crc kubenswrapper[4907]: I0127 18:32:30.019959 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 27 18:32:31 crc kubenswrapper[4907]: I0127 18:32:31.878616 4907 generic.go:334] "Generic (PLEG): container finished" podID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerID="435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406" exitCode=0 Jan 27 18:32:31 crc kubenswrapper[4907]: I0127 18:32:31.878712 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerDied","Data":"435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406"} Jan 27 18:32:33 crc kubenswrapper[4907]: I0127 18:32:33.904660 4907 generic.go:334] "Generic (PLEG): container finished" podID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerID="dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8" exitCode=0 Jan 27 18:32:33 crc kubenswrapper[4907]: I0127 18:32:33.904739 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerDied","Data":"dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8"} Jan 27 18:32:34 crc kubenswrapper[4907]: I0127 18:32:34.748239 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:32:34 crc kubenswrapper[4907]: E0127 18:32:34.748853 4907 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.556245 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.560758 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.564820 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.582742 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665336 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665680 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665714 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665786 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665809 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665973 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.666011 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 
crc kubenswrapper[4907]: I0127 18:32:38.768420 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768609 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768639 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 
18:32:38.768736 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768762 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.769867 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.770657 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.771272 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.772258 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.773026 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.773717 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.800616 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.920320 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.179927 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.183067 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.196082 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.268480 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.302770 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.303544 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.303715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.303754 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406641 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406704 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406751 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406782 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.407509 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408102 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: 
\"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408372 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408406 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408430 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408590 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408616 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408649 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408693 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408715 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408874 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408901 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 
18:32:40.408967 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409033 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409064 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409129 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.410572 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") 
pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.411145 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.411228 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412302 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412431 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412587 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412633 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.415945 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.417599 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.422599 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.423289 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.424477 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.427104 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.427473 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q" (OuterVolumeSpecName: "kube-api-access-52b6q") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "kube-api-access-52b6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.435507 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info" (OuterVolumeSpecName: "pod-info") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.437603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.439482 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.443947 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.460727 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj" (OuterVolumeSpecName: "kube-api-access-v88pj") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "kube-api-access-v88pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.475689 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info" (OuterVolumeSpecName: "pod-info") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: E0127 18:32:40.514566 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf podName:52cb02a9-7a60-4761-9770-a9b6910f1088 nodeName:}" failed. No retries permitted until 2026-01-27 18:32:41.014521051 +0000 UTC m=+1616.143803713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515371 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515397 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515407 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515416 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515425 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515433 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515443 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515451 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515458 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515466 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515473 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515482 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515492 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515499 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.535826 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data" (OuterVolumeSpecName: "config-data") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.558308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622" (OuterVolumeSpecName: "persistence") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "pvc-2544df99-ce65-431e-b41d-029cd6318622". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.607918 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf" (OuterVolumeSpecName: "server-conf") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.609589 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data" (OuterVolumeSpecName: "config-data") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.623739 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627498 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627539 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") on node \"crc\" " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627550 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627582 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.655797 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf" (OuterVolumeSpecName: "server-conf") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.666163 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.666335 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2544df99-ce65-431e-b41d-029cd6318622" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622") on node "crc" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.697447 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.730223 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.730258 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.730270 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.744274 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.833218 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.980248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerDied","Data":"3251ec13ecf2d816f4249d2d95826865dd55f4c6e4f346e728a8820870c8122f"} Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.980286 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.980294 4907 scope.go:117] "RemoveContainer" containerID="435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.983347 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerDied","Data":"ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40"} Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.983418 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.018110 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.039736 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.044310 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.064504 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf" (OuterVolumeSpecName: "persistence") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "pvc-c92dd174-2681-4ccd-ace7-bb768c862acf". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.069745 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070395 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070421 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070459 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070466 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070480 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070489 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070527 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070538 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070793 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: 
I0127 18:32:41.070806 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.072270 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.085399 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.144984 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145590 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145744 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146171 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdvmw\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-kube-api-access-sdvmw\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146300 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146394 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: 
\"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146513 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146683 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-config-data\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.147390 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") on node \"crc\" " Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.208485 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.209962 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c92dd174-2681-4ccd-ace7-bb768c862acf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf") on node "crc" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.219741 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.249860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.250179 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.250359 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.250516 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc 
kubenswrapper[4907]: I0127 18:32:41.253588 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdvmw\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-kube-api-access-sdvmw\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253745 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253844 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253989 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254162 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-config-data\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254363 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254873 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.255296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-config-data\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.257588 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 
18:32:41.262201 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254579 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.262808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.262845 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.263421 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.265345 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.265375 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ec55be154a66d09157b0ca2623a596d4c9f6b8adde5f16f336c822c2282072f/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.274694 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.278580 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.280486 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdvmw\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-kube-api-access-sdvmw\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.282910 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.285167 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.287313 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.287321 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291485 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291780 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291798 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fl6zh" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291930 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.292043 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.293287 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358376 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358483 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p7gdp\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-kube-api-access-p7gdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358504 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/021272d4-b660-4c16-b9a6-befd84abe2cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358533 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358593 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358622 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358648 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/021272d4-b660-4c16-b9a6-befd84abe2cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358826 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358853 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358870 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358914 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.360302 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461242 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461306 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461342 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461397 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461510 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461697 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gdp\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-kube-api-access-p7gdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461729 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/021272d4-b660-4c16-b9a6-befd84abe2cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461830 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461861 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461889 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/021272d4-b660-4c16-b9a6-befd84abe2cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.462290 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.463415 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.463442 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d92d749e8b6234664dd57319b2b5b7962d8bfa8dc2f0d92cbae41209d539d4c4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.464999 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.465208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.465710 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.467009 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/021272d4-b660-4c16-b9a6-befd84abe2cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.467257 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.469526 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.470197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.470991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/021272d4-b660-4c16-b9a6-befd84abe2cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.478501 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.480724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gdp\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-kube-api-access-p7gdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.519917 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.716207 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.812049 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" path="/var/lib/kubelet/pods/52cb02a9-7a60-4761-9770-a9b6910f1088/volumes" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.815028 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" path="/var/lib/kubelet/pods/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e/volumes" Jan 27 18:32:44 crc kubenswrapper[4907]: I0127 18:32:44.641686 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Jan 27 18:32:45 crc kubenswrapper[4907]: I0127 18:32:45.020689 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" 
podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.407198 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.407758 4907 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.407913 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsgzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-6xh4v_openstack(a67fd41b-79b0-4ab4-86b6-816389597620): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 
27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.409351 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-6xh4v" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.749023 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.750193 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.755640 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.755694 4907 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.755829 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66dh579h677hfbh58ch597h56h674h5ddh6dhd7h57bh66ch5ddh5bfh5dch5bfh88h565h696h9dhbh546h667h666h5b4h558h654h585h65fh5fbh67fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8v86x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8cc0b779-ca13-49be-91c1-ea2eb4a99d9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.787764 4907 scope.go:117] "RemoveContainer" containerID="6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a" Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.788227 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.962410 4907 scope.go:117] "RemoveContainer" containerID="dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8" Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.066207 4907 scope.go:117] "RemoveContainer" containerID="008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e" Jan 27 18:32:48 crc kubenswrapper[4907]: E0127 18:32:48.127966 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-6xh4v" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.630570 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:48 crc kubenswrapper[4907]: W0127 18:32:48.646943 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5364919_e030_4b8d_a22d_708b6c7bd0cb.slice/crio-b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3 WatchSource:0}: Error finding container b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3: Status 404 returned error can't find the container with id b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3 Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.646971 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:32:48 crc kubenswrapper[4907]: W0127 18:32:48.652437 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f8e936e_82a6_49cc_bb09_d247a2d0e47b.slice/crio-fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6 WatchSource:0}: Error finding container fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6: Status 404 returned error can't find the container with id fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6 Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.660102 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:48 crc kubenswrapper[4907]: W0127 18:32:48.667929 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189c0f02_da43_4eb5_9cf1_ff9154e1a952.slice/crio-7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4 WatchSource:0}: Error finding container 7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4: Status 404 returned error can't find the container with id 7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4 Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.673524 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.105212 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerStarted","Data":"92684dc2eb115bb505f833aa3fb3a06a87b54f5594cffbaea4a66b68ceb77d0d"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.106748 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerStarted","Data":"fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.108070 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerStarted","Data":"e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.108097 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerStarted","Data":"b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.110813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" 
event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.110877 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4"} Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.140641 4907 generic.go:334] "Generic (PLEG): container finished" podID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerID="e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c" exitCode=0 Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.140839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerDied","Data":"e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c"} Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.151706 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"0c2ac46d1e93a41bc414cb4514124fb328356ae9f1b768ba585227d541f6220e"} Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.154836 4907 generic.go:334] "Generic (PLEG): container finished" podID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerID="245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1" exitCode=0 Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.154869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.172087 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerStarted","Data":"50f55f0c0b4a989d807726928ab2d56581879267e991795d560d13a89d68b702"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.179068 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerStarted","Data":"b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.180090 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.184525 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerStarted","Data":"befd9365abfe2e65e5f9cdedac175feb33a24273b6a8cede89305220df15b5d5"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.191953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"8e1f96bb050bc5d4264454553d68034e151e1a7adb65b2bc4cfa829c4501075e"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.263978 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" podStartSLOduration=13.263960604 podStartE2EDuration="13.263960604s" podCreationTimestamp="2026-01-27 18:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:51.254377552 +0000 UTC m=+1626.383660164" watchObservedRunningTime="2026-01-27 18:32:51.263960604 +0000 UTC m=+1626.393243216" Jan 27 18:32:52 crc kubenswrapper[4907]: I0127 18:32:52.205058 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1"} Jan 27 18:32:53 crc kubenswrapper[4907]: E0127 18:32:53.446860 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.236174 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"570eb6afe3191636018d46ec4cf448c4203df83da283c7465c362786dd4332f2"} Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.236547 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:32:54 crc kubenswrapper[4907]: E0127 18:32:54.238695 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.242037 4907 generic.go:334] "Generic (PLEG): container finished" podID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerID="12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1" exitCode=0 Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.242085 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" 
event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1"} Jan 27 18:32:55 crc kubenswrapper[4907]: E0127 18:32:55.254207 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" Jan 27 18:32:56 crc kubenswrapper[4907]: I0127 18:32:56.264778 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16"} Jan 27 18:32:56 crc kubenswrapper[4907]: I0127 18:32:56.292808 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f75nb" podStartSLOduration=11.19626883 podStartE2EDuration="16.292791177s" podCreationTimestamp="2026-01-27 18:32:40 +0000 UTC" firstStartedPulling="2026-01-27 18:32:50.158464937 +0000 UTC m=+1625.287747549" lastFinishedPulling="2026-01-27 18:32:55.254987284 +0000 UTC m=+1630.384269896" observedRunningTime="2026-01-27 18:32:56.281000542 +0000 UTC m=+1631.410283164" watchObservedRunningTime="2026-01-27 18:32:56.292791177 +0000 UTC m=+1631.422073789" Jan 27 18:32:58 crc kubenswrapper[4907]: I0127 18:32:58.921716 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.026414 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.027695 4907 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" containerID="cri-o://5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900" gracePeriod=10 Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.577432 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hhml4"] Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.579781 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.597885 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-config\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598054 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4vg\" (UniqueName: \"kubernetes.io/projected/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-kube-api-access-sr4vg\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598093 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598114 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598138 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598185 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.611296 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hhml4"] Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.699746 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr4vg\" (UniqueName: \"kubernetes.io/projected/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-kube-api-access-sr4vg\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " 
pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.699826 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.699870 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700731 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700819 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " 
pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700839 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.701019 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.701395 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.701574 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-config\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: 
I0127 18:32:59.702155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-config\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.702617 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.748597 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr4vg\" (UniqueName: \"kubernetes.io/projected/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-kube-api-access-sr4vg\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.910243 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.314946 4907 generic.go:334] "Generic (PLEG): container finished" podID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerID="5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900" exitCode=0 Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.315059 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerDied","Data":"5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900"} Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.494893 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hhml4"] Jan 27 18:33:00 crc kubenswrapper[4907]: W0127 18:33:00.502951 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf7b3c3_995f_48f8_a74f_3ffaf08f6d1e.slice/crio-d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc WatchSource:0}: Error finding container d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc: Status 404 returned error can't find the container with id d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.624927 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.624989 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.688338 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.728240 4907 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.748376 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:00 crc kubenswrapper[4907]: E0127 18:33:00.748714 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827305 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827388 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827495 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827869 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj74m\" 
(UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827901 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827923 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.835084 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m" (OuterVolumeSpecName: "kube-api-access-wj74m") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "kube-api-access-wj74m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.901031 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.901999 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.906094 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.907776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931155 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931188 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931198 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931206 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931217 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.947533 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config" (OuterVolumeSpecName: "config") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.033854 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.328108 4907 generic.go:334] "Generic (PLEG): container finished" podID="5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e" containerID="31a2062688fcfc39ec1785f0eb116e327ef0432830cedbbd68c1c695d6bb9644" exitCode=0 Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.328181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" event={"ID":"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e","Type":"ContainerDied","Data":"31a2062688fcfc39ec1785f0eb116e327ef0432830cedbbd68c1c695d6bb9644"} Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.328218 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" event={"ID":"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e","Type":"ContainerStarted","Data":"d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc"} Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.330806 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.330829 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerDied","Data":"4bd8eb5f48ea3f38d33f0dd542b84168a28c90547f3c08b18a3dbbf20455e507"} Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.330883 4907 scope.go:117] "RemoveContainer" containerID="5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.379494 4907 scope.go:117] "RemoveContainer" containerID="f064d23f4fac689b5a994e7c87e0b8620a4d34af790c875556eda8d9fe99678c" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.386547 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.397488 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.411784 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.464419 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.845167 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" path="/var/lib/kubelet/pods/2a5f060b-75dd-4083-badf-a9d208f59b65/volumes" Jan 27 18:33:02 crc kubenswrapper[4907]: I0127 18:33:02.344209 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" event={"ID":"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e","Type":"ContainerStarted","Data":"535db4e83b43e189c59263127328f7feec73639c2d150d1546dad635ff9ed5c3"} 
Jan 27 18:33:02 crc kubenswrapper[4907]: I0127 18:33:02.344535 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:33:02 crc kubenswrapper[4907]: I0127 18:33:02.380735 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" podStartSLOduration=3.380715607 podStartE2EDuration="3.380715607s" podCreationTimestamp="2026-01-27 18:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:02.369517259 +0000 UTC m=+1637.498799871" watchObservedRunningTime="2026-01-27 18:33:02.380715607 +0000 UTC m=+1637.509998219" Jan 27 18:33:03 crc kubenswrapper[4907]: I0127 18:33:03.362413 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerStarted","Data":"fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490"} Jan 27 18:33:03 crc kubenswrapper[4907]: I0127 18:33:03.362772 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f75nb" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" containerID="cri-o://9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16" gracePeriod=2 Jan 27 18:33:03 crc kubenswrapper[4907]: I0127 18:33:03.419280 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-6xh4v" podStartSLOduration=2.530614913 podStartE2EDuration="46.419259111s" podCreationTimestamp="2026-01-27 18:32:17 +0000 UTC" firstStartedPulling="2026-01-27 18:32:18.170578403 +0000 UTC m=+1593.299861015" lastFinishedPulling="2026-01-27 18:33:02.059222581 +0000 UTC m=+1637.188505213" observedRunningTime="2026-01-27 18:33:03.410291456 +0000 UTC m=+1638.539574068" watchObservedRunningTime="2026-01-27 
18:33:03.419259111 +0000 UTC m=+1638.548541723" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.380521 4907 generic.go:334] "Generic (PLEG): container finished" podID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerID="9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16" exitCode=0 Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.380671 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16"} Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.605436 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.753698 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.754034 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") pod \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.754192 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.754765 4907 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities" (OuterVolumeSpecName: "utilities") pod "189c0f02-da43-4eb5-9cf1-ff9154e1a952" (UID: "189c0f02-da43-4eb5-9cf1-ff9154e1a952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.755240 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.766953 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq" (OuterVolumeSpecName: "kube-api-access-7vfkq") pod "189c0f02-da43-4eb5-9cf1-ff9154e1a952" (UID: "189c0f02-da43-4eb5-9cf1-ff9154e1a952"). InnerVolumeSpecName "kube-api-access-7vfkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.799306 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "189c0f02-da43-4eb5-9cf1-ff9154e1a952" (UID: "189c0f02-da43-4eb5-9cf1-ff9154e1a952"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.857686 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.858548 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.400017 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4"} Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.400107 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.400112 4907 scope.go:117] "RemoveContainer" containerID="9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.401997 4907 generic.go:334] "Generic (PLEG): container finished" podID="a67fd41b-79b0-4ab4-86b6-816389597620" containerID="fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490" exitCode=0 Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.402074 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerDied","Data":"fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490"} Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.451164 4907 scope.go:117] "RemoveContainer" containerID="12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.466638 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.480061 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.482270 4907 scope.go:117] "RemoveContainer" containerID="245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.768309 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" path="/var/lib/kubelet/pods/189c0f02-da43-4eb5-9cf1-ff9154e1a952/volumes" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.100687 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.230463 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"a67fd41b-79b0-4ab4-86b6-816389597620\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.231800 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"a67fd41b-79b0-4ab4-86b6-816389597620\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.231973 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"a67fd41b-79b0-4ab4-86b6-816389597620\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.235922 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr" (OuterVolumeSpecName: "kube-api-access-hsgzr") pod "a67fd41b-79b0-4ab4-86b6-816389597620" (UID: "a67fd41b-79b0-4ab4-86b6-816389597620"). InnerVolumeSpecName "kube-api-access-hsgzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.264978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67fd41b-79b0-4ab4-86b6-816389597620" (UID: "a67fd41b-79b0-4ab4-86b6-816389597620"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.322653 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data" (OuterVolumeSpecName: "config-data") pod "a67fd41b-79b0-4ab4-86b6-816389597620" (UID: "a67fd41b-79b0-4ab4-86b6-816389597620"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.335111 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.335149 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.335161 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.432700 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerDied","Data":"05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b"} Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.432747 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.432777 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.782261 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 18:33:08 crc kubenswrapper[4907]: I0127 18:33:08.447319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5"} Jan 27 18:33:08 crc kubenswrapper[4907]: I0127 18:33:08.475465 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.223742032 podStartE2EDuration="47.475446081s" podCreationTimestamp="2026-01-27 18:32:21 +0000 UTC" firstStartedPulling="2026-01-27 18:32:22.791282252 +0000 UTC m=+1597.920564864" lastFinishedPulling="2026-01-27 18:33:08.042986281 +0000 UTC m=+1643.172268913" observedRunningTime="2026-01-27 18:33:08.470894662 +0000 UTC m=+1643.600177334" watchObservedRunningTime="2026-01-27 18:33:08.475446081 +0000 UTC m=+1643.604728703" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.208307 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-668f78b-db9cs"] Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209214 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="init" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209234 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="init" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209251 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-content" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209260 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-content" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209269 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" containerName="heat-db-sync" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209276 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" containerName="heat-db-sync" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209288 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209294 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209307 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-utilities" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209313 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-utilities" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209325 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209330 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209622 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209661 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209672 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" containerName="heat-db-sync" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.210548 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.224401 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-668f78b-db9cs"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.239847 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7b8679c4d-pw2cq"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.242851 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.276442 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7b8679c4d-pw2cq"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292165 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data-custom\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292243 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-combined-ca-bundle\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292295 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data-custom\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-internal-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292357 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292395 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svlrp\" (UniqueName: \"kubernetes.io/projected/0d2540a9-525b-46c6-b0ae-23e163484c98-kube-api-access-svlrp\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-public-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 
18:33:09.292482 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvvjx\" (UniqueName: \"kubernetes.io/projected/14d9243a-0abc-40ce-9881-eef907bdafe3-kube-api-access-nvvjx\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292516 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-combined-ca-bundle\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292530 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.345600 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-96749fcd4-hh92n"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.347141 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.360861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-96749fcd4-hh92n"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405077 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data-custom\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405168 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-internal-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405228 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405314 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-combined-ca-bundle\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405372 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlrp\" (UniqueName: \"kubernetes.io/projected/0d2540a9-525b-46c6-b0ae-23e163484c98-kube-api-access-svlrp\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409261 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-public-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409349 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2prb\" (UniqueName: \"kubernetes.io/projected/effdf66a-d041-45e1-a1f0-bd1367a2d80a-kube-api-access-n2prb\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409425 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-public-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409467 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvvjx\" 
(UniqueName: \"kubernetes.io/projected/14d9243a-0abc-40ce-9881-eef907bdafe3-kube-api-access-nvvjx\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409549 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-combined-ca-bundle\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409641 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data-custom\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-internal-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data-custom\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409922 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-combined-ca-bundle\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.420262 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-public-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.420275 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data-custom\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.421104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-internal-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.421818 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data\") pod \"heat-engine-668f78b-db9cs\" 
(UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.422235 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-combined-ca-bundle\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.436173 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.437807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data-custom\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.438626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-combined-ca-bundle\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.439781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlrp\" (UniqueName: \"kubernetes.io/projected/0d2540a9-525b-46c6-b0ae-23e163484c98-kube-api-access-svlrp\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 
crc kubenswrapper[4907]: I0127 18:33:09.439819 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvvjx\" (UniqueName: \"kubernetes.io/projected/14d9243a-0abc-40ce-9881-eef907bdafe3-kube-api-access-nvvjx\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512818 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512889 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-combined-ca-bundle\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512963 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-public-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2prb\" (UniqueName: \"kubernetes.io/projected/effdf66a-d041-45e1-a1f0-bd1367a2d80a-kube-api-access-n2prb\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.513064 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data-custom\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.513121 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-internal-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.519020 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.519861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data-custom\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.523320 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-combined-ca-bundle\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.523403 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-internal-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.530725 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.531824 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-public-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.536593 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2prb\" (UniqueName: \"kubernetes.io/projected/effdf66a-d041-45e1-a1f0-bd1367a2d80a-kube-api-access-n2prb\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.564372 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.672884 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.914144 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.981473 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.981794 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" containerID="cri-o://b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a" gracePeriod=10 Jan 27 18:33:10 crc kubenswrapper[4907]: W0127 18:33:10.184140 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d2540a9_525b_46c6_b0ae_23e163484c98.slice/crio-ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b WatchSource:0}: Error finding container ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b: Status 404 returned error can't find the container with id ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.191416 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-668f78b-db9cs"] Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.509280 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7b8679c4d-pw2cq"] Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.532753 4907 generic.go:334] "Generic (PLEG): container finished" podID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerID="b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a" exitCode=0 Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.532857 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerDied","Data":"b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a"} Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.548614 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-668f78b-db9cs" event={"ID":"0d2540a9-525b-46c6-b0ae-23e163484c98","Type":"ContainerStarted","Data":"ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b"} Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.718417 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-96749fcd4-hh92n"] Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.003320 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.086817 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.086858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.086931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc 
kubenswrapper[4907]: I0127 18:33:11.087018 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.087057 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.087104 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.087135 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.096374 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw" (OuterVolumeSpecName: "kube-api-access-4bzqw") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "kube-api-access-4bzqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.191580 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.191875 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.193257 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.213172 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.222762 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.224145 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.245077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config" (OuterVolumeSpecName: "config") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.293944 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.293986 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.293999 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.294010 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.294026 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.294037 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.563821 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-96749fcd4-hh92n" event={"ID":"effdf66a-d041-45e1-a1f0-bd1367a2d80a","Type":"ContainerStarted","Data":"929bf63f3810d7b9ddf81d81e0065ce2e51f867bef5434e751b2fa7893cb4152"} Jan 27 18:33:11 crc 
kubenswrapper[4907]: I0127 18:33:11.567540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerDied","Data":"b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3"} Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.567620 4907 scope.go:117] "RemoveContainer" containerID="b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.567820 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.575225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-668f78b-db9cs" event={"ID":"0d2540a9-525b-46c6-b0ae-23e163484c98","Type":"ContainerStarted","Data":"3790edf5f6d66ff4eae110856a26bda957351b7ab3e5d82a518f0570f5fa97ef"} Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.575863 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.577780 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7b8679c4d-pw2cq" event={"ID":"14d9243a-0abc-40ce-9881-eef907bdafe3","Type":"ContainerStarted","Data":"47981bc5ae9c3b1f065892d0b6ab60013463ce2b0a916d4c859a64ae3368693c"} Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.611777 4907 scope.go:117] "RemoveContainer" containerID="e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.613151 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-668f78b-db9cs" podStartSLOduration=2.6131333 podStartE2EDuration="2.6131333s" podCreationTimestamp="2026-01-27 18:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:11.597616539 +0000 UTC m=+1646.726899161" watchObservedRunningTime="2026-01-27 18:33:11.6131333 +0000 UTC m=+1646.742415902" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.661117 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.686502 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.774819 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" path="/var/lib/kubelet/pods/f5364919-e030-4b8d-a22d-708b6c7bd0cb/volumes" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.629614 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7b8679c4d-pw2cq" event={"ID":"14d9243a-0abc-40ce-9881-eef907bdafe3","Type":"ContainerStarted","Data":"8c03bc940a22e4e9cd8893ab9e72aa3f6028129f8e3d8bd16fa9e93afc489f39"} Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.630131 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.633795 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-96749fcd4-hh92n" event={"ID":"effdf66a-d041-45e1-a1f0-bd1367a2d80a","Type":"ContainerStarted","Data":"501033acab0b35e726127b267bc195027fe219a33b78e00ba15fec516510908a"} Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.633942 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.675083 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7b8679c4d-pw2cq" podStartSLOduration=2.493688766 
podStartE2EDuration="4.675062717s" podCreationTimestamp="2026-01-27 18:33:09 +0000 UTC" firstStartedPulling="2026-01-27 18:33:10.543172794 +0000 UTC m=+1645.672455406" lastFinishedPulling="2026-01-27 18:33:12.724546745 +0000 UTC m=+1647.853829357" observedRunningTime="2026-01-27 18:33:13.66883547 +0000 UTC m=+1648.798118082" watchObservedRunningTime="2026-01-27 18:33:13.675062717 +0000 UTC m=+1648.804345329" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.698108 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-96749fcd4-hh92n" podStartSLOduration=2.710636772 podStartE2EDuration="4.698087992s" podCreationTimestamp="2026-01-27 18:33:09 +0000 UTC" firstStartedPulling="2026-01-27 18:33:10.74046062 +0000 UTC m=+1645.869743232" lastFinishedPulling="2026-01-27 18:33:12.72791184 +0000 UTC m=+1647.857194452" observedRunningTime="2026-01-27 18:33:13.691506475 +0000 UTC m=+1648.820789087" watchObservedRunningTime="2026-01-27 18:33:13.698087992 +0000 UTC m=+1648.827370604" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.748824 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:13 crc kubenswrapper[4907]: E0127 18:33:13.749277 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.318264 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.398777 4907 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.399013 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-667f9867c-2tvqc" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" containerID="cri-o://7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e" gracePeriod=60 Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.558786 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.630460 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.630735 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" containerID="cri-o://2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b" gracePeriod=60 Jan 27 18:33:22 crc kubenswrapper[4907]: I0127 18:33:22.740769 4907 generic.go:334] "Generic (PLEG): container finished" podID="5f8e936e-82a6-49cc-bb09-d247a2d0e47b" containerID="50f55f0c0b4a989d807726928ab2d56581879267e991795d560d13a89d68b702" exitCode=0 Jan 27 18:33:22 crc kubenswrapper[4907]: I0127 18:33:22.740825 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerDied","Data":"50f55f0c0b4a989d807726928ab2d56581879267e991795d560d13a89d68b702"} Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.772804 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerStarted","Data":"513a21ffd0f39e6cc4dfae09c542ebb5143c73fcdc3dcc16edea83c1cd58a7f4"} Jan 27 18:33:23 crc 
kubenswrapper[4907]: I0127 18:33:23.773434 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.776337 4907 generic.go:334] "Generic (PLEG): container finished" podID="021272d4-b660-4c16-b9a6-befd84abe2cc" containerID="befd9365abfe2e65e5f9cdedac175feb33a24273b6a8cede89305220df15b5d5" exitCode=0 Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.776382 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerDied","Data":"befd9365abfe2e65e5f9cdedac175feb33a24273b6a8cede89305220df15b5d5"} Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.825263 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=42.825240871 podStartE2EDuration="42.825240871s" podCreationTimestamp="2026-01-27 18:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:23.809198575 +0000 UTC m=+1658.938481187" watchObservedRunningTime="2026-01-27 18:33:23.825240871 +0000 UTC m=+1658.954523473" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.792934 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerStarted","Data":"34c310d50042bf2bf53bdfbfbfbebf3fc3c04eb3f90e2d1cc6453ecb10918aa6"} Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.794630 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.850086 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" 
probeResult="failure" output="Get \"https://10.217.0.224:8000/healthcheck\": read tcp 10.217.0.2:53748->10.217.0.224:8000: read: connection reset by peer" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.854106 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.85408834 podStartE2EDuration="43.85408834s" podCreationTimestamp="2026-01-27 18:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:24.852758002 +0000 UTC m=+1659.982040644" watchObservedRunningTime="2026-01-27 18:33:24.85408834 +0000 UTC m=+1659.983370952" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.905888 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-667f9867c-2tvqc" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.223:8004/healthcheck\": read tcp 10.217.0.2:50236->10.217.0.223:8004: read: connection reset by peer" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.826107 4907 generic.go:334] "Generic (PLEG): container finished" podID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerID="7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e" exitCode=0 Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.826616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerDied","Data":"7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e"} Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.826645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerDied","Data":"982750ecf1d92da3b9717ddf32bec4e3216a8b464d7df0c13f157bfd3020e7bb"} Jan 27 18:33:25 crc 
kubenswrapper[4907]: I0127 18:33:25.826655 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="982750ecf1d92da3b9717ddf32bec4e3216a8b464d7df0c13f157bfd3020e7bb" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832704 4907 generic.go:334] "Generic (PLEG): container finished" podID="97762448-336d-4609-a574-310d1b61aa04" containerID="2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b" exitCode=0 Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832790 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerDied","Data":"2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b"} Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832838 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerDied","Data":"932bc826c2156d8a545c997d012284767107290f98ea3a005ea9a94b6a995a9a"} Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832849 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="932bc826c2156d8a545c997d012284767107290f98ea3a005ea9a94b6a995a9a" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.847725 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.851255 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.994868 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995082 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995182 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995217 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995821 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995860 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995936 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995963 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.996038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.996079 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " 
Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.996103 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.001842 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.008211 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.013735 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l" (OuterVolumeSpecName: "kube-api-access-5928l") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "kube-api-access-5928l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.020948 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t" (OuterVolumeSpecName: "kube-api-access-5nq4t") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "kube-api-access-5nq4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.048429 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099456 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099488 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099499 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099508 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099516 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.162958 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data" (OuterVolumeSpecName: "config-data") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.167984 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.184666 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.201725 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data" (OuterVolumeSpecName: "config-data") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.201926 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203610 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203638 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203649 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203660 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") on node \"crc\" DevicePath 
\"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203672 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.208716 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.214122 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.305281 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.305321 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.748699 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:26 crc kubenswrapper[4907]: E0127 18:33:26.749360 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.845724 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.845731 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.900373 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.914537 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.935023 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.939241 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:33:27 crc kubenswrapper[4907]: I0127 18:33:27.765537 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97762448-336d-4609-a574-310d1b61aa04" path="/var/lib/kubelet/pods/97762448-336d-4609-a574-310d1b61aa04/volumes" Jan 27 18:33:27 crc kubenswrapper[4907]: I0127 18:33:27.766260 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" path="/var/lib/kubelet/pods/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15/volumes" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.587955 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.644261 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.644764 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-575dc845-lv7nr" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" containerID="cri-o://f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" gracePeriod=60 Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.692802 4907 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"] Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693325 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="init" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693343 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="init" Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693377 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693385 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693417 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693424 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693439 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693445 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693715 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693739 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693759 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.694757 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700282 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700601 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700730 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700771 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.704987 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"] Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792842 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792867 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792931 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895509 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895575 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895603 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895747 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.901323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.903977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" 
Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.907163 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.928420 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:30 crc kubenswrapper[4907]: I0127 18:33:30.018285 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.671016 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.683244 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.684998 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.685051 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-575dc845-lv7nr" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" Jan 27 18:33:31 crc kubenswrapper[4907]: I0127 18:33:31.067606 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"] Jan 27 18:33:31 crc kubenswrapper[4907]: I0127 18:33:31.912058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerStarted","Data":"d92515b9d06346fb3e0c12da1fccec05a0315bad218f3dfd7f1dfe6fa7a5f977"} Jan 27 18:33:35 crc kubenswrapper[4907]: I0127 18:33:35.271812 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6d47577fc9-fz5kg" podUID="bfb5201d-eb44-42cb-a5ab-49520cc1e741" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.439183 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.454243 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.536927 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.538543 4907 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.542437 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.569814 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.607665 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.607818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.607845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.608046 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.709965 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.710005 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.710133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.710208 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.716135 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.719473 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.729685 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.733049 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.778458 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" path="/var/lib/kubelet/pods/c16f7a68-05a6-494f-94ce-1774118b0592/volumes" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.881096 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.659994 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.661808 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.663015 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.663075 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-575dc845-lv7nr" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" Jan 27 18:33:40 crc kubenswrapper[4907]: I0127 18:33:40.749702 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.750103 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:41 crc kubenswrapper[4907]: I0127 18:33:41.481965 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 18:33:41 crc kubenswrapper[4907]: I0127 18:33:41.724764 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:33:41 crc kubenswrapper[4907]: I0127 18:33:41.740495 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:33:44 crc kubenswrapper[4907]: I0127 18:33:44.086966 4907 generic.go:334] "Generic (PLEG): container finished" podID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" exitCode=0 Jan 27 18:33:44 crc kubenswrapper[4907]: I0127 18:33:44.087092 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerDied","Data":"f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5"} Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.010620 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:33:46 crc kubenswrapper[4907]: W0127 18:33:46.012258 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee2938a8_fe59_4c5a_abd0_7957ecb6b796.slice/crio-86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944 WatchSource:0}: Error finding container 86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944: Status 404 returned error can't find the container with id 
86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944 Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.054305 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.110564 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerDied","Data":"04b2490b34471bde3da133012b6b62ccc9d41cf3e6a16b1fd242cf158ae8c1e2"} Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.110633 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b2490b34471bde3da133012b6b62ccc9d41cf3e6a16b1fd242cf158ae8c1e2" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.111817 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerStarted","Data":"86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944"} Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.241073 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348116 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348239 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348527 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348712 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.353529 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.361902 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv" (OuterVolumeSpecName: "kube-api-access-7r6bv") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "kube-api-access-7r6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.394137 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.418253 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data" (OuterVolumeSpecName: "config-data") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451919 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451965 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451977 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451990 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.126474 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.126487 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerStarted","Data":"53a4fbe6c4402dd00ad7adf4741e15c2cd063e7fa6f6cc532a14e9f28ea22129"} Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.155248 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" podStartSLOduration=3.170996426 podStartE2EDuration="18.155224067s" podCreationTimestamp="2026-01-27 18:33:29 +0000 UTC" firstStartedPulling="2026-01-27 18:33:31.066733025 +0000 UTC m=+1666.196015637" lastFinishedPulling="2026-01-27 18:33:46.050960676 +0000 UTC m=+1681.180243278" observedRunningTime="2026-01-27 18:33:47.145905452 +0000 UTC m=+1682.275188134" watchObservedRunningTime="2026-01-27 18:33:47.155224067 +0000 UTC m=+1682.284506689" Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.188425 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.214391 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.763547 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" path="/var/lib/kubelet/pods/51ff4a9d-d39e-4357-a248-4b93e5eeaf13/volumes" Jan 27 18:33:50 crc kubenswrapper[4907]: I0127 18:33:50.273470 4907 scope.go:117] "RemoveContainer" containerID="9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd" Jan 27 18:33:51 crc kubenswrapper[4907]: I0127 18:33:51.311512 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" 
podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" containerID="cri-o://4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759" gracePeriod=604791 Jan 27 18:33:51 crc kubenswrapper[4907]: I0127 18:33:51.997083 4907 scope.go:117] "RemoveContainer" containerID="16729300b105c848b87da536ab581fbf0466941c7a08dd9bcf81bc9c3e1432ed" Jan 27 18:33:52 crc kubenswrapper[4907]: I0127 18:33:52.748282 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:52 crc kubenswrapper[4907]: E0127 18:33:52.748917 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:53 crc kubenswrapper[4907]: I0127 18:33:53.096761 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:33:53 crc kubenswrapper[4907]: I0127 18:33:53.749557 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 18:33:53 crc kubenswrapper[4907]: I0127 18:33:53.749818 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 18:33:53 crc 
kubenswrapper[4907]: I0127 18:33:53.756871 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.045328 4907 scope.go:117] "RemoveContainer" containerID="3bbc9b483b2ac3711ce029100cb12ceb3f91e479b6591235f5a6fbedf804d371" Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.080708 4907 scope.go:117] "RemoveContainer" containerID="eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b" Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.154862 4907 scope.go:117] "RemoveContainer" containerID="aca7542bafc6f8a501bc005b4af4e8a5df758a4f8de58c5b60071b0c8be6107f" Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.321042 4907 scope.go:117] "RemoveContainer" containerID="dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619" Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.327477 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:33:55 crc kubenswrapper[4907]: I0127 18:33:55.260520 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerStarted","Data":"71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918"} Jan 27 18:33:55 crc kubenswrapper[4907]: I0127 18:33:55.289159 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zhncj" podStartSLOduration=9.982930682 podStartE2EDuration="18.289142024s" podCreationTimestamp="2026-01-27 18:33:37 +0000 UTC" firstStartedPulling="2026-01-27 18:33:46.01661674 +0000 UTC m=+1681.145899352" lastFinishedPulling="2026-01-27 18:33:54.322828082 +0000 
UTC m=+1689.452110694" observedRunningTime="2026-01-27 18:33:55.276309759 +0000 UTC m=+1690.405592381" watchObservedRunningTime="2026-01-27 18:33:55.289142024 +0000 UTC m=+1690.418424636" Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.371721 4907 generic.go:334] "Generic (PLEG): container finished" podID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerID="4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759" exitCode=0 Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.371827 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerDied","Data":"4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759"} Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.374788 4907 generic.go:334] "Generic (PLEG): container finished" podID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" containerID="53a4fbe6c4402dd00ad7adf4741e15c2cd063e7fa6f6cc532a14e9f28ea22129" exitCode=0 Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.374834 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerDied","Data":"53a4fbe6c4402dd00ad7adf4741e15c2cd063e7fa6f6cc532a14e9f28ea22129"} Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.395759 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerDied","Data":"0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea"} Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.396778 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.479546 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504545 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504790 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504886 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504915 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504957 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505006 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505060 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505081 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505171 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.515331 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.525410 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.526396 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.526595 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.528029 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.528061 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.532172 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls" 
(OuterVolumeSpecName: "rabbitmq-tls") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.536654 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info" (OuterVolumeSpecName: "pod-info") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.538822 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.548453 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp" (OuterVolumeSpecName: "kube-api-access-b8hgp") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "kube-api-access-b8hgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.552695 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633335 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633372 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633384 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633396 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633411 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.686065 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data" (OuterVolumeSpecName: "config-data") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.723328 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf" (OuterVolumeSpecName: "server-conf") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.735762 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.735794 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.813801 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.840424 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.902038 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0" (OuterVolumeSpecName: "persistence") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "pvc-49bee2ea-921f-42f7-b022-927bee51e4f0". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.943458 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") on node \"crc\" " Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.012042 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.012844 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-49bee2ea-921f-42f7-b022-927bee51e4f0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0") on node "crc" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.045164 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.286285 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351419 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351458 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351651 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.358728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.358888 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj" (OuterVolumeSpecName: "kube-api-access-c9wdj") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "kube-api-access-c9wdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.403182 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory" (OuterVolumeSpecName: "inventory") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.408669 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.419296 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.420015 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.424886 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerDied","Data":"d92515b9d06346fb3e0c12da1fccec05a0315bad218f3dfd7f1dfe6fa7a5f977"} Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.424930 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92515b9d06346fb3e0c12da1fccec05a0315bad218f3dfd7f1dfe6fa7a5f977" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454337 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454368 4907 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454379 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454391 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.511300 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.549833 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.566757 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"] Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567274 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567290 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567318 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567325 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" 
containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567340 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567346 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567363 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="setup-container" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567370 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="setup-container" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567609 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567629 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567648 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.568419 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.574763 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.575231 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.575346 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.575486 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.595324 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"] Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.610550 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.612546 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.631171 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.657965 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0246bb-5533-495d-849f-617b346c8fde-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0246bb-5533-495d-849f-617b346c8fde-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658273 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658296 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfqj\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-kube-api-access-5rfqj\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658353 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658393 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-config-data\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658488 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658510 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658592 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658624 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658646 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.760574 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.760911 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761029 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfqj\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-kube-api-access-5rfqj\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761098 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761527 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-config-data\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762300 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762469 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762847 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0246bb-5533-495d-849f-617b346c8fde-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762999 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-config-data\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.763450 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.763881 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0246bb-5533-495d-849f-617b346c8fde-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.763950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.764125 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.766514 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0246bb-5533-495d-849f-617b346c8fde-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.767952 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.769324 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" path="/var/lib/kubelet/pods/45d050d2-eeb4-4603-a6c4-1cbdd454ea35/volumes"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.770887 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.785207 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.787095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.787202 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.788618 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.788647 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/34bf333d34756f1b83dde2eb30c2397a83048a027d2708516d2de7b96e990e99/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.794517 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0246bb-5533-495d-849f-617b346c8fde-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.795136 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.796223 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfqj\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-kube-api-access-5rfqj\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.875695 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.891362 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.945453 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:02 crc kubenswrapper[4907]: W0127 18:34:02.633237 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2872f844_3f1a_4d9b_8f96_5cc01d0cae12.slice/crio-3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8 WatchSource:0}: Error finding container 3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8: Status 404 returned error can't find the container with id 3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8
Jan 27 18:34:02 crc kubenswrapper[4907]: I0127 18:34:02.652937 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 18:34:02 crc kubenswrapper[4907]: I0127 18:34:02.668344 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"]
Jan 27 18:34:03 crc kubenswrapper[4907]: I0127 18:34:03.443180 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerStarted","Data":"cdbce7184be087e11e87f58ef85e79cb799451ef6f3067b5737634a652ed4b9c"}
Jan 27 18:34:03 crc kubenswrapper[4907]: I0127 18:34:03.444873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerStarted","Data":"3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8"}
Jan 27 18:34:04 crc kubenswrapper[4907]: I0127 18:34:04.749402 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:34:04 crc kubenswrapper[4907]: E0127 18:34:04.750108 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:34:04 crc kubenswrapper[4907]: I0127 18:34:04.927418 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: i/o timeout"
Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.470224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerStarted","Data":"19be5fc1c14536ff846acb17420caf9b52b966701db8a7a3cd9d6ef8c854187d"}
Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.473105 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerStarted","Data":"727a3fa5e1b5abfdf67af3851789c9f46024817a85e3ad5c07e34cf98ae61fde"}
Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.475147 4907 generic.go:334] "Generic (PLEG): container finished" podID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" containerID="71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918" exitCode=0
Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.475173 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerDied","Data":"71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918"}
Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.516425 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" podStartSLOduration=3.468642793 podStartE2EDuration="4.516401809s" podCreationTimestamp="2026-01-27 18:34:01 +0000 UTC" firstStartedPulling="2026-01-27 18:34:02.637228937 +0000 UTC m=+1697.766511549" lastFinishedPulling="2026-01-27 18:34:03.684987953 +0000 UTC m=+1698.814270565" observedRunningTime="2026-01-27 18:34:05.512900229 +0000 UTC m=+1700.642182841" watchObservedRunningTime="2026-01-27 18:34:05.516401809 +0000 UTC m=+1700.645684441"
Jan 27 18:34:06 crc kubenswrapper[4907]: I0127 18:34:06.932794 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zhncj"
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.037497 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") "
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.037809 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") "
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.037836 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") "
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.038045 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") "
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.044802 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc" (OuterVolumeSpecName: "kube-api-access-b7knc") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "kube-api-access-b7knc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.052147 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts" (OuterVolumeSpecName: "scripts") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.073089 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.083686 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data" (OuterVolumeSpecName: "config-data") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.140946 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.140990 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.140999 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.141009 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.498912 4907 generic.go:334] "Generic (PLEG): container finished" podID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerID="727a3fa5e1b5abfdf67af3851789c9f46024817a85e3ad5c07e34cf98ae61fde" exitCode=0
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.498996 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerDied","Data":"727a3fa5e1b5abfdf67af3851789c9f46024817a85e3ad5c07e34cf98ae61fde"}
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.506008 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerDied","Data":"86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944"}
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.506057 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944"
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.507015 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zhncj"
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.678869 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.679496 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" containerID="cri-o://e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa" gracePeriod=30
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.679671 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-listener" containerID="cri-o://d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad" gracePeriod=30
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.680302 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" containerID="cri-o://5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86" gracePeriod=30
Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.680337 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" containerID="cri-o://2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff" gracePeriod=30
Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521046 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86" exitCode=0
Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521082 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa" exitCode=0
Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521261 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86"}
Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521291 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa"}
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.068529 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.188381 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") "
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.188829 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") "
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.188956 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") "
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.196452 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp" (OuterVolumeSpecName: "kube-api-access-k8zkp") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12"). InnerVolumeSpecName "kube-api-access-k8zkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:34:09 crc kubenswrapper[4907]: E0127 18:34:09.224046 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam podName:2872f844-3f1a-4d9b-8f96-5cc01d0cae12 nodeName:}" failed. No retries permitted until 2026-01-27 18:34:09.724014154 +0000 UTC m=+1704.853296776 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12") : error deleting /var/lib/kubelet/pods/2872f844-3f1a-4d9b-8f96-5cc01d0cae12/volume-subpaths: remove /var/lib/kubelet/pods/2872f844-3f1a-4d9b-8f96-5cc01d0cae12/volume-subpaths: no such file or directory
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.227305 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory" (OuterVolumeSpecName: "inventory") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.292073 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.292118 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.532606 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerDied","Data":"3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8"}
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.533385 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8"
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.532634 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.540965 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad" exitCode=0
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.541039 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad"}
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.804697 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") "
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.820173 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.907974 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.226043 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"]
Jan 27 18:34:10 crc kubenswrapper[4907]: E0127 18:34:10.226712 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.226739 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:34:10 crc kubenswrapper[4907]: E0127 18:34:10.226762 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" containerName="aodh-db-sync"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.226771 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" containerName="aodh-db-sync"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.227058 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.227079 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" containerName="aodh-db-sync"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.228161 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231214 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231414 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231244 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231872 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.237696 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"]
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.325897 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"
Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.325996 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"
Jan 27 18:34:10 crc kubenswrapper[4907]:
I0127 18:34:10.326108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.326251 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.428793 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.428948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.429084 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.429135 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.433247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.433376 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.439000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.448353 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.562357 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.178024 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"] Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.661857 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff" exitCode=0 Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.661968 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff"} Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.682461 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerStarted","Data":"beeb809af464f1de247ce2aef34056bd50b453ac1014ef9b475c873dfa140da6"} Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.994754 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073278 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073403 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073432 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073610 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073761 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073787 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.081388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts" (OuterVolumeSpecName: "scripts") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.086574 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w" (OuterVolumeSpecName: "kube-api-access-n2p9w") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "kube-api-access-n2p9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.152962 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.178648 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.178683 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.178695 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.184277 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.232896 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data" (OuterVolumeSpecName: "config-data") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.250081 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.281236 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.281280 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.281292 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.697217 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8"} Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.697273 4907 scope.go:117] "RemoveContainer" containerID="d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.697287 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.708244 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerStarted","Data":"cd3b3fe4cc89215770648734713c16762101d8d8f5528da2b4cc19c06925044c"} Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.743912 4907 scope.go:117] "RemoveContainer" containerID="2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.748845 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" podStartSLOduration=2.289007297 podStartE2EDuration="2.748800593s" podCreationTimestamp="2026-01-27 18:34:10 +0000 UTC" firstStartedPulling="2026-01-27 18:34:11.19754701 +0000 UTC m=+1706.326829622" lastFinishedPulling="2026-01-27 18:34:11.657340306 +0000 UTC m=+1706.786622918" observedRunningTime="2026-01-27 18:34:12.738391518 +0000 UTC m=+1707.867674130" watchObservedRunningTime="2026-01-27 18:34:12.748800593 +0000 UTC m=+1707.878083205" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.770340 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.777840 4907 scope.go:117] "RemoveContainer" containerID="5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.781338 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.799470 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.799932 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" 
containerName="aodh-listener" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.799947 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-listener" Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.799966 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.799972 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.799998 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800004 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.800019 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800026 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800251 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800287 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800298 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" Jan 27 
18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800309 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-listener" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.802355 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807060 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807302 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807432 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807635 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.808435 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.822901 4907 scope.go:117] "RemoveContainer" containerID="e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.838659 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894436 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-public-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894513 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-config-data\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894549 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-internal-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894747 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vq7w\" (UniqueName: \"kubernetes.io/projected/15bed332-56fa-45cd-8ab4-5d4cced0e671-kube-api-access-8vq7w\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894867 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-combined-ca-bundle\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.895020 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-scripts\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.996973 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-public-tls-certs\") pod \"aodh-0\" 
(UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997331 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-config-data\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997375 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-internal-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997516 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vq7w\" (UniqueName: \"kubernetes.io/projected/15bed332-56fa-45cd-8ab4-5d4cced0e671-kube-api-access-8vq7w\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-combined-ca-bundle\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-scripts\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.003372 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-public-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.003922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-scripts\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.003928 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-config-data\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.005171 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-combined-ca-bundle\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.021326 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-internal-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.022308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vq7w\" (UniqueName: \"kubernetes.io/projected/15bed332-56fa-45cd-8ab4-5d4cced0e671-kube-api-access-8vq7w\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.124688 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0"
Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.627013 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 27 18:34:13 crc kubenswrapper[4907]: W0127 18:34:13.628793 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15bed332_56fa_45cd_8ab4_5d4cced0e671.slice/crio-25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b WatchSource:0}: Error finding container 25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b: Status 404 returned error can't find the container with id 25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b
Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.720903 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b"}
Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.761733 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" path="/var/lib/kubelet/pods/a6c7b40d-63e2-4fbf-a59d-44c106984d76/volumes"
Jan 27 18:34:14 crc kubenswrapper[4907]: I0127 18:34:14.738855 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"f2436e7517e5de93bc206c51de13598ed68ba3e09d8dc335e519ad4419f25ae2"}
Jan 27 18:34:15 crc kubenswrapper[4907]: I0127 18:34:15.766248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"8d2bdac5f867820cd01edaaeaec4c586fafdbbaf53c761b99e3633c45064d2de"}
Jan 27 18:34:16 crc kubenswrapper[4907]: I0127 18:34:16.791745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"b16708456ade029178668a362e73bf08cbec6aa9acf892393eb318e6e9616a1d"}
Jan 27 18:34:18 crc kubenswrapper[4907]: I0127 18:34:18.816735 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"618b205d959f06d236345b27bae6b8bdbe1b0c426c556e879eed5b93d4300dad"}
Jan 27 18:34:18 crc kubenswrapper[4907]: I0127 18:34:18.854599 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.808798527 podStartE2EDuration="6.854570322s" podCreationTimestamp="2026-01-27 18:34:12 +0000 UTC" firstStartedPulling="2026-01-27 18:34:13.630519451 +0000 UTC m=+1708.759802063" lastFinishedPulling="2026-01-27 18:34:17.676291246 +0000 UTC m=+1712.805573858" observedRunningTime="2026-01-27 18:34:18.83868235 +0000 UTC m=+1713.967964962" watchObservedRunningTime="2026-01-27 18:34:18.854570322 +0000 UTC m=+1713.983852944"
Jan 27 18:34:19 crc kubenswrapper[4907]: I0127 18:34:19.748798 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:34:19 crc kubenswrapper[4907]: E0127 18:34:19.749371 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:34:32 crc kubenswrapper[4907]: I0127 18:34:32.748020 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:34:32 crc kubenswrapper[4907]: E0127 18:34:32.748841 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:34:37 crc kubenswrapper[4907]: I0127 18:34:37.036918 4907 generic.go:334] "Generic (PLEG): container finished" podID="0e0246bb-5533-495d-849f-617b346c8fde" containerID="19be5fc1c14536ff846acb17420caf9b52b966701db8a7a3cd9d6ef8c854187d" exitCode=0
Jan 27 18:34:37 crc kubenswrapper[4907]: I0127 18:34:37.037029 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerDied","Data":"19be5fc1c14536ff846acb17420caf9b52b966701db8a7a3cd9d6ef8c854187d"}
Jan 27 18:34:38 crc kubenswrapper[4907]: I0127 18:34:38.050544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerStarted","Data":"b92cf935e7c9be0c4e1f3ca984bf7c162cb346bc3030b781dc5f4c893afd96b9"}
Jan 27 18:34:38 crc kubenswrapper[4907]: I0127 18:34:38.051071 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:38 crc kubenswrapper[4907]: I0127 18:34:38.093178 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.093156345 podStartE2EDuration="37.093156345s" podCreationTimestamp="2026-01-27 18:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:34:38.084026075 +0000 UTC m=+1733.213308687" watchObservedRunningTime="2026-01-27 18:34:38.093156345 +0000 UTC m=+1733.222438957"
Jan 27 18:34:47 crc kubenswrapper[4907]: I0127 18:34:47.748725 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:34:47 crc kubenswrapper[4907]: E0127 18:34:47.749379 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:34:51 crc kubenswrapper[4907]: I0127 18:34:51.948720 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:52 crc kubenswrapper[4907]: I0127 18:34:52.005130 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.769435 4907 scope.go:117] "RemoveContainer" containerID="9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87"
Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.806190 4907 scope.go:117] "RemoveContainer" containerID="47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28"
Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.858189 4907 scope.go:117] "RemoveContainer" containerID="86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f"
Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.905476 4907 scope.go:117] "RemoveContainer" containerID="28bbb72e623034afdbf128221b10f5cd93ad8bc3e76bd585f307ba4d60b2e87c"
Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.934905 4907 scope.go:117] "RemoveContainer" containerID="4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759"
Jan 27 18:34:55 crc kubenswrapper[4907]: I0127 18:34:55.904892 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" containerID="cri-o://7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1" gracePeriod=604797
Jan 27 18:34:59 crc kubenswrapper[4907]: I0127 18:34:59.911829 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused"
Jan 27 18:35:01 crc kubenswrapper[4907]: I0127 18:35:01.748121 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:35:01 crc kubenswrapper[4907]: E0127 18:35:01.748959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.319910 4907 generic.go:334] "Generic (PLEG): container finished" podID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerID="7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1" exitCode=0
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.320009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerDied","Data":"7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1"}
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.644581 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742097 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742147 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrrgc\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742179 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742204 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743142 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743346 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743454 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743496 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743607 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743637 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743676 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.751517 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.757408 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc" (OuterVolumeSpecName: "kube-api-access-qrrgc") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "kube-api-access-qrrgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.764505 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.765119 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.765453 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info" (OuterVolumeSpecName: "pod-info") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.778608 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.778999 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.810514 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data" (OuterVolumeSpecName: "config-data") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.839749 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e" (OuterVolumeSpecName: "persistence") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870722 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870793 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870806 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870824 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870834 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870846 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrrgc\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870858 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870867 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870941 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") on node \"crc\" "
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.922748 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.922924 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e") on node "crc"
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.933909 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf" (OuterVolumeSpecName: "server-conf") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.951606 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.973752 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.973810 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.973841 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") on node \"crc\" DevicePath \"\""
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.340486 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerDied","Data":"f438ce9452f05a2c33576c461be5d8342246dc4a389096e1ff8d110a343a2c82"}
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.340611 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.340757 4907 scope.go:117] "RemoveContainer" containerID="7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.377695 4907 scope.go:117] "RemoveContainer" containerID="f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.418072 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.457771 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.480660 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 18:35:03 crc kubenswrapper[4907]: E0127 18:35:03.481137 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.481152 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq"
Jan 27 18:35:03 crc kubenswrapper[4907]: E0127 18:35:03.481183 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="setup-container"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.481189 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="setup-container"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.481460 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.488231 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.540059 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.596706 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.596869 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.596940 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.597063 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.597166 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.597247 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608366 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7kk\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-kube-api-access-ql7kk\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608523 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be9e879-df48-4aea-9f07-b297cabca4f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608657 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be9e879-df48-4aea-9f07-b297cabca4f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.712778 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.712940 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.712991 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713101 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713265 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713310 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713338 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql7kk\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-kube-api-access-ql7kk\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713422 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be9e879-df48-4aea-9f07-b297cabca4f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713477 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be9e879-df48-4aea-9f07-b297cabca4f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713534 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.714155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.714193 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.716536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.717007 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.719781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be9e879-df48-4aea-9f07-b297cabca4f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.720611 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.737114 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e84612870a5c0c4830950c12b2fd6510f31530f3fd62287fde6ecf77067364b/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.722400 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be9e879-df48-4aea-9f07-b297cabca4f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.723085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.745674 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.752366 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql7kk\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-kube-api-access-ql7kk\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.788387 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" path="/var/lib/kubelet/pods/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce/volumes"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.854645 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0"
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.876982 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 18:35:04 crc kubenswrapper[4907]: I0127 18:35:04.549045 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 18:35:05 crc kubenswrapper[4907]: I0127 18:35:05.367651 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerStarted","Data":"6aa71536be738a4b95b38c16f4c4fe61914c2fa67302373fde7c5fa831d6a1f1"}
Jan 27 18:35:07 crc kubenswrapper[4907]: I0127 18:35:07.392492 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerStarted","Data":"c2b48b96b3d6bda9890fa05bd4e999229c048ccc359b8e7fb2ef352c1d69f765"}
Jan 27 18:35:12 crc kubenswrapper[4907]: I0127 18:35:12.748834 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:35:12 crc kubenswrapper[4907]: E0127 18:35:12.753542 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:35:26 crc kubenswrapper[4907]: I0127 18:35:26.748091 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:35:26 crc kubenswrapper[4907]: E0127 18:35:26.749152 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:35:39 crc kubenswrapper[4907]: I0127 18:35:39.766115 4907 generic.go:334] "Generic (PLEG): container finished" podID="0be9e879-df48-4aea-9f07-b297cabca4f3" containerID="c2b48b96b3d6bda9890fa05bd4e999229c048ccc359b8e7fb2ef352c1d69f765" exitCode=0
Jan 27 18:35:39 crc kubenswrapper[4907]: I0127 18:35:39.766243 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerDied","Data":"c2b48b96b3d6bda9890fa05bd4e999229c048ccc359b8e7fb2ef352c1d69f765"}
Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.748633 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:35:40 crc kubenswrapper[4907]: E0127 18:35:40.749600 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.782713 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerStarted","Data":"f07e603f1a8c006b9a7f92c74fe7cb34ea5edaa3d3f2a4619b58674baf6a3b5d"} Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.782941 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.816079 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.816055087 podStartE2EDuration="37.816055087s" podCreationTimestamp="2026-01-27 18:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:35:40.810720225 +0000 UTC m=+1795.940002837" watchObservedRunningTime="2026-01-27 18:35:40.816055087 +0000 UTC m=+1795.945337709" Jan 27 18:35:53 crc kubenswrapper[4907]: I0127 18:35:53.749653 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:35:53 crc kubenswrapper[4907]: E0127 18:35:53.750809 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:35:53 crc kubenswrapper[4907]: I0127 18:35:53.879787 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.071436 4907 scope.go:117] "RemoveContainer" containerID="2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.102382 4907 scope.go:117] "RemoveContainer" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.129240 4907 scope.go:117] "RemoveContainer" containerID="7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.817083 4907 scope.go:117] "RemoveContainer" containerID="6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.890204 4907 scope.go:117] "RemoveContainer" containerID="f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.983915 4907 scope.go:117] "RemoveContainer" containerID="72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3" Jan 27 18:36:07 crc kubenswrapper[4907]: I0127 18:36:07.748291 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:07 crc kubenswrapper[4907]: E0127 18:36:07.749134 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:22 crc kubenswrapper[4907]: I0127 18:36:22.748628 4907 scope.go:117] "RemoveContainer" 
containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:22 crc kubenswrapper[4907]: E0127 18:36:22.749700 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:36 crc kubenswrapper[4907]: I0127 18:36:36.748690 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:36 crc kubenswrapper[4907]: E0127 18:36:36.749483 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.065296 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.081214 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.092702 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.106148 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:36:49 crc 
kubenswrapper[4907]: I0127 18:36:49.760529 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" path="/var/lib/kubelet/pods/5edef5c0-5919-4ddd-93cd-65b569c78603/volumes" Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.762020 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" path="/var/lib/kubelet/pods/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5/volumes" Jan 27 18:36:51 crc kubenswrapper[4907]: I0127 18:36:51.749256 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:51 crc kubenswrapper[4907]: E0127 18:36:51.749940 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.035374 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.048254 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.058981 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.069232 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.761762 4907 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" path="/var/lib/kubelet/pods/a0adeee4-a225-49f2-8a87-f44aa772d5f2/volumes" Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.763132 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1662136-4082-412a-9846-92ea9aff9350" path="/var/lib/kubelet/pods/e1662136-4082-412a-9846-92ea9aff9350/volumes" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.164786 4907 scope.go:117] "RemoveContainer" containerID="44d85d18431154ddbd383c884bcbc74eacef1eade71d6866721522e05fe32ba7" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.190906 4907 scope.go:117] "RemoveContainer" containerID="4d91c25a7314aab9b3fd8d4f969c9d2c94f6673a332760843f4352aa203efe16" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.259527 4907 scope.go:117] "RemoveContainer" containerID="7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.311742 4907 scope.go:117] "RemoveContainer" containerID="f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.376579 4907 scope.go:117] "RemoveContainer" containerID="439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.404028 4907 scope.go:117] "RemoveContainer" containerID="e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.444145 4907 scope.go:117] "RemoveContainer" containerID="bbe32d131e2f18cc943ec38e3f64224872de067fbdcc4c36da535318442ade1c" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.497101 4907 scope.go:117] "RemoveContainer" containerID="88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.563248 4907 scope.go:117] "RemoveContainer" containerID="59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d" 
Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.048464 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.068595 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.084894 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.101009 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.116392 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.131455 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.141783 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.153385 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.163390 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.175101 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.185929 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.196040 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.761636 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" path="/var/lib/kubelet/pods/0ef0a2ee-9212-41c9-b2b9-d59602779eef/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.762569 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" path="/var/lib/kubelet/pods/3dfbf931-f21b-4652-8640-0208df4b40cc/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.763812 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" path="/var/lib/kubelet/pods/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.764401 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" path="/var/lib/kubelet/pods/94f0fdef-b14b-4204-be1e-90a5d19c96e7/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.765422 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7319b76-e25b-4370-ac3e-641efd764024" path="/var/lib/kubelet/pods/d7319b76-e25b-4370-ac3e-641efd764024/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.766017 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" path="/var/lib/kubelet/pods/f1904c81-5de8-431a-9304-5b4ba1771c73/volumes" Jan 27 18:37:03 crc kubenswrapper[4907]: I0127 18:37:03.748701 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:37:05 crc kubenswrapper[4907]: I0127 18:37:05.018458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61"} Jan 27 18:37:21 crc kubenswrapper[4907]: I0127 18:37:21.196216 4907 generic.go:334] "Generic (PLEG): container finished" podID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerID="cd3b3fe4cc89215770648734713c16762101d8d8f5528da2b4cc19c06925044c" exitCode=0 Jan 27 18:37:21 crc kubenswrapper[4907]: I0127 18:37:21.196297 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerDied","Data":"cd3b3fe4cc89215770648734713c16762101d8d8f5528da2b4cc19c06925044c"} Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.724658 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798278 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798583 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798636 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod 
\"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798705 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.872128 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.875896 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d" (OuterVolumeSpecName: "kube-api-access-br56d") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "kube-api-access-br56d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.902055 4907 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.902195 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.906622 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.910626 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory" (OuterVolumeSpecName: "inventory") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.004598 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.004634 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.217986 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerDied","Data":"beeb809af464f1de247ce2aef34056bd50b453ac1014ef9b475c873dfa140da6"} Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.218038 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beeb809af464f1de247ce2aef34056bd50b453ac1014ef9b475c873dfa140da6" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.218043 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.306596 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j"] Jan 27 18:37:23 crc kubenswrapper[4907]: E0127 18:37:23.307136 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.307154 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.307376 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.308261 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.311687 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.311861 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.311973 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.319648 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j"] Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.353966 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.439680 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.439841 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc 
kubenswrapper[4907]: I0127 18:37:23.439960 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.542429 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.542575 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.542689 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.550856 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.556729 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.568667 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.626328 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:24 crc kubenswrapper[4907]: I0127 18:37:24.212819 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j"] Jan 27 18:37:25 crc kubenswrapper[4907]: I0127 18:37:25.244728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerStarted","Data":"5fad2c5feb01370ee16f93dca8ec3bd45d6de814e0cf3a459e134697d4374f3f"} Jan 27 18:37:26 crc kubenswrapper[4907]: I0127 18:37:26.258060 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerStarted","Data":"d41a0650651648b9dc8466dd8517d8fbe456875cb9074e13651305905101e034"} Jan 27 18:37:26 crc kubenswrapper[4907]: I0127 18:37:26.291760 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" podStartSLOduration=1.789014147 podStartE2EDuration="3.291733896s" podCreationTimestamp="2026-01-27 18:37:23 +0000 UTC" firstStartedPulling="2026-01-27 18:37:24.246231641 +0000 UTC m=+1899.375514253" lastFinishedPulling="2026-01-27 18:37:25.74895139 +0000 UTC m=+1900.878234002" observedRunningTime="2026-01-27 18:37:26.28102956 +0000 UTC m=+1901.410312192" watchObservedRunningTime="2026-01-27 18:37:26.291733896 +0000 UTC m=+1901.421016508" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.055542 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.073680 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 
18:37:39.095066 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.102929 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.115760 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.128460 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.138028 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.146969 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.155827 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fxqjb"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.164601 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.173763 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.182967 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.192167 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.201738 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:37:39 crc 
kubenswrapper[4907]: I0127 18:37:39.211253 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.219973 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fxqjb"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.229201 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.237922 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.761296 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" path="/var/lib/kubelet/pods/32b7a898-5d57-496a-8ad1-380b636e3629/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.762906 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" path="/var/lib/kubelet/pods/421865e2-2878-4bc4-9480-7afb5e7133fd/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.764169 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" path="/var/lib/kubelet/pods/701eaff9-db27-4bff-975c-b8ebf034725f/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.766566 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" path="/var/lib/kubelet/pods/7844ef4e-92dd-4ea6-a792-b255290ef833/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.771448 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" path="/var/lib/kubelet/pods/85c8faae-95fb-4533-b45c-51e91bb95947/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.772935 4907 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" path="/var/lib/kubelet/pods/ac5cca69-8afc-417f-9f22-93c279730bf7/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.774764 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" path="/var/lib/kubelet/pods/b54f9573-0bd6-4133-872a-b9e73129d654/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.776147 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" path="/var/lib/kubelet/pods/c3998964-67eb-4adb-912d-a6367ae3beaf/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.778254 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" path="/var/lib/kubelet/pods/d206e054-cdc8-4a59-9de8-93bfeae80700/volumes" Jan 27 18:37:44 crc kubenswrapper[4907]: I0127 18:37:44.030980 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:37:44 crc kubenswrapper[4907]: I0127 18:37:44.043062 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:37:45 crc kubenswrapper[4907]: I0127 18:37:45.766903 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" path="/var/lib/kubelet/pods/2dd8fea0-24a6-4212-875a-5cf95105f549/volumes" Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.069250 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.081233 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.783350 4907 scope.go:117] "RemoveContainer" containerID="10e55fe5e5f3f44965d382e66da77d31f65621ac8cb2c4078f7f47ef99fb45e2" Jan 27 18:37:56 crc 
kubenswrapper[4907]: I0127 18:37:56.834908 4907 scope.go:117] "RemoveContainer" containerID="fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9" Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.998646 4907 scope.go:117] "RemoveContainer" containerID="f4fe6d9aa44983cf3005cdcf2f1caa2f42f6abc6fd5fd5929b1ffc71281905af" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.103826 4907 scope.go:117] "RemoveContainer" containerID="4b4bc386243282ee46469e04f6c8ed985996c9353b6cb7136b8bfb839c0ee6f9" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.137848 4907 scope.go:117] "RemoveContainer" containerID="96f5f54754dcd10e1621eddfd599cc7bbc58a42f87dd064a880d55efea873246" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.186367 4907 scope.go:117] "RemoveContainer" containerID="68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.280242 4907 scope.go:117] "RemoveContainer" containerID="3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.312097 4907 scope.go:117] "RemoveContainer" containerID="902d727f209f42dea64c5a07767c7eefd3763b39fbd8787f8e221e479efe5a44" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.336951 4907 scope.go:117] "RemoveContainer" containerID="948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.400629 4907 scope.go:117] "RemoveContainer" containerID="c19d26368aac03d155fdb3c70b0039080c0304f82ccc02493e32e5a1524bf346" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.424284 4907 scope.go:117] "RemoveContainer" containerID="6e9d2124e0377737283913dd9cbf18f7728bb3d38ed97f318b0a2c7e1a625185" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.471042 4907 scope.go:117] "RemoveContainer" containerID="3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 
18:37:57.560205 4907 scope.go:117] "RemoveContainer" containerID="7e112e59f5539451246e55f428962aa397a6f7440a0b99d285fc7caa5e097dae" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.591203 4907 scope.go:117] "RemoveContainer" containerID="63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.619180 4907 scope.go:117] "RemoveContainer" containerID="40e8e634f7c46a3b2a6980b5fabdea0883786df5d7e952882f0176d870d9c0b4" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.657912 4907 scope.go:117] "RemoveContainer" containerID="a0326a0a501bbf85df41833a1dcafeaa580f24dd04c07b7e0136b03e2680cb1d" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.778064 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" path="/var/lib/kubelet/pods/1e2cf5dd-be65-4237-b77e-9bcc84cd26de/volumes" Jan 27 18:38:19 crc kubenswrapper[4907]: I0127 18:38:19.040082 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:38:19 crc kubenswrapper[4907]: I0127 18:38:19.052759 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:38:19 crc kubenswrapper[4907]: I0127 18:38:19.765366 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e539a06-3352-4163-a259-6fd53182fe02" path="/var/lib/kubelet/pods/9e539a06-3352-4163-a259-6fd53182fe02/volumes" Jan 27 18:38:32 crc kubenswrapper[4907]: I0127 18:38:32.034448 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:38:32 crc kubenswrapper[4907]: I0127 18:38:32.044988 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:38:32 crc kubenswrapper[4907]: I0127 18:38:32.058379 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:38:32 crc 
kubenswrapper[4907]: I0127 18:38:32.069897 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.037388 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.049801 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.762856 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" path="/var/lib/kubelet/pods/3d3838ba-a929-4aab-a58d-dd4f39628f00/volumes" Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.764206 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" path="/var/lib/kubelet/pods/b745a073-e4cf-471d-92ce-ac5da568b38e/volumes" Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.766415 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" path="/var/lib/kubelet/pods/de23a4c9-a62e-4523-8480-b19f3f10f586/volumes" Jan 27 18:38:49 crc kubenswrapper[4907]: I0127 18:38:49.043750 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:38:49 crc kubenswrapper[4907]: I0127 18:38:49.070723 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:38:49 crc kubenswrapper[4907]: I0127 18:38:49.761325 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" path="/var/lib/kubelet/pods/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c/volumes" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.192052 4907 scope.go:117] "RemoveContainer" containerID="89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 
18:38:58.219074 4907 scope.go:117] "RemoveContainer" containerID="c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.283459 4907 scope.go:117] "RemoveContainer" containerID="b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.352041 4907 scope.go:117] "RemoveContainer" containerID="7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.409432 4907 scope.go:117] "RemoveContainer" containerID="3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.487034 4907 scope.go:117] "RemoveContainer" containerID="8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc" Jan 27 18:39:20 crc kubenswrapper[4907]: I0127 18:39:20.540946 4907 generic.go:334] "Generic (PLEG): container finished" podID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerID="d41a0650651648b9dc8466dd8517d8fbe456875cb9074e13651305905101e034" exitCode=0 Jan 27 18:39:20 crc kubenswrapper[4907]: I0127 18:39:20.541046 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerDied","Data":"d41a0650651648b9dc8466dd8517d8fbe456875cb9074e13651305905101e034"} Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.033392 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.220494 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.220699 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.220866 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.227141 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw" (OuterVolumeSpecName: "kube-api-access-7g9qw") pod "ad792b6c-ce47-4ef4-964c-e91423a94f1b" (UID: "ad792b6c-ce47-4ef4-964c-e91423a94f1b"). InnerVolumeSpecName "kube-api-access-7g9qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.266119 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory" (OuterVolumeSpecName: "inventory") pod "ad792b6c-ce47-4ef4-964c-e91423a94f1b" (UID: "ad792b6c-ce47-4ef4-964c-e91423a94f1b"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.277467 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad792b6c-ce47-4ef4-964c-e91423a94f1b" (UID: "ad792b6c-ce47-4ef4-964c-e91423a94f1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.324139 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.324175 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.324190 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") on node \"crc\" DevicePath \"\"" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.566023 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerDied","Data":"5fad2c5feb01370ee16f93dca8ec3bd45d6de814e0cf3a459e134697d4374f3f"} Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.566064 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fad2c5feb01370ee16f93dca8ec3bd45d6de814e0cf3a459e134697d4374f3f" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 
18:39:22.566073 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.658586 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5"] Jan 27 18:39:22 crc kubenswrapper[4907]: E0127 18:39:22.659105 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.659125 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.659495 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.660454 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.670937 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.670979 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.671105 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.671438 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.677821 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5"] Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.837303 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.837913 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc 
kubenswrapper[4907]: I0127 18:39:22.838047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.940241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.940327 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.940388 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.945006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.948050 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.959311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.994043 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:23 crc kubenswrapper[4907]: I0127 18:39:23.556753 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5"] Jan 27 18:39:23 crc kubenswrapper[4907]: I0127 18:39:23.557462 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:39:23 crc kubenswrapper[4907]: I0127 18:39:23.582678 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerStarted","Data":"2f310d8883a1b1ec42ffd46d55f5e6c7fe1d1fb6b0cd2987a556d596484d7627"} Jan 27 18:39:24 crc kubenswrapper[4907]: I0127 18:39:24.593026 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerStarted","Data":"27500bcc72d151a572ed87e5e290a7f578c943044490d5b92431fe2a66525be4"} Jan 27 18:39:24 crc kubenswrapper[4907]: I0127 18:39:24.613358 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" podStartSLOduration=2.202721019 podStartE2EDuration="2.613338921s" podCreationTimestamp="2026-01-27 18:39:22 +0000 UTC" firstStartedPulling="2026-01-27 18:39:23.557203694 +0000 UTC m=+2018.686486306" lastFinishedPulling="2026-01-27 18:39:23.967821596 +0000 UTC m=+2019.097104208" observedRunningTime="2026-01-27 18:39:24.608205014 +0000 UTC m=+2019.737487666" watchObservedRunningTime="2026-01-27 18:39:24.613338921 +0000 UTC m=+2019.742621543" Jan 27 18:39:26 crc kubenswrapper[4907]: I0127 18:39:26.521250 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:39:26 crc kubenswrapper[4907]: I0127 18:39:26.521761 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.733900 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.738405 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.776337 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.874461 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.874511 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc 
kubenswrapper[4907]: I0127 18:39:47.874534 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.976760 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.976812 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.976834 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.977257 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc 
kubenswrapper[4907]: I0127 18:39:47.977330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.997297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.062483 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.559397 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.882154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274"} Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.882568 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"d870dde3209bdd6b610732d708524f925a35bfaed65822d0ede635108e245da8"} Jan 27 18:39:49 crc kubenswrapper[4907]: I0127 18:39:49.895415 4907 generic.go:334] "Generic (PLEG): container finished" podID="ec66903e-4bd3-45bc-915d-4c46b7f50550" 
containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" exitCode=0 Jan 27 18:39:49 crc kubenswrapper[4907]: I0127 18:39:49.895664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274"} Jan 27 18:39:50 crc kubenswrapper[4907]: I0127 18:39:50.911513 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22"} Jan 27 18:39:52 crc kubenswrapper[4907]: I0127 18:39:52.934928 4907 generic.go:334] "Generic (PLEG): container finished" podID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" exitCode=0 Jan 27 18:39:52 crc kubenswrapper[4907]: I0127 18:39:52.935047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22"} Jan 27 18:39:53 crc kubenswrapper[4907]: I0127 18:39:53.961509 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2"} Jan 27 18:39:53 crc kubenswrapper[4907]: I0127 18:39:53.992983 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fddws" podStartSLOduration=3.559044312 podStartE2EDuration="6.992961549s" podCreationTimestamp="2026-01-27 18:39:47 +0000 UTC" firstStartedPulling="2026-01-27 18:39:49.898742506 
+0000 UTC m=+2045.028025118" lastFinishedPulling="2026-01-27 18:39:53.332659743 +0000 UTC m=+2048.461942355" observedRunningTime="2026-01-27 18:39:53.982245794 +0000 UTC m=+2049.111528406" watchObservedRunningTime="2026-01-27 18:39:53.992961549 +0000 UTC m=+2049.122244171" Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.051483 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.063633 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.073328 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.082537 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.521121 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.521189 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.039712 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.050096 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 
18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.062230 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.073380 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.085853 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.097097 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.109013 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.122943 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.768219 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" path="/var/lib/kubelet/pods/1567baee-fe0b-481f-9aca-c424237d77fd/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.771212 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" path="/var/lib/kubelet/pods/22bda35b-bb7e-40c5-a263-56fdb4a28784/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.772243 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" path="/var/lib/kubelet/pods/743ace74-8ac2-43c7-807c-47379f8c50f4/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.773411 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" 
path="/var/lib/kubelet/pods/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.775263 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" path="/var/lib/kubelet/pods/9fd63a47-2bbf-455b-8732-8d489507a2a0/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.776170 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" path="/var/lib/kubelet/pods/db79947d-82c1-4b66-8f0d-d34b96ff9a16/volumes" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.063577 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.063631 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.737216 4907 scope.go:117] "RemoveContainer" containerID="5dcd6a3a423875bc06c0dd0f5d51a2b87f68629b62084c20315ec4e27b26da69" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.770770 4907 scope.go:117] "RemoveContainer" containerID="34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.842787 4907 scope.go:117] "RemoveContainer" containerID="d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.902665 4907 scope.go:117] "RemoveContainer" containerID="2bccdaf75b95d0168a686ecc348808d6673dada9c3494bcaf8bc20faf0ab6f1c" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.981023 4907 scope.go:117] "RemoveContainer" containerID="0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4" Jan 27 18:39:59 crc kubenswrapper[4907]: I0127 18:39:59.086007 4907 scope.go:117] "RemoveContainer" 
containerID="9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b" Jan 27 18:39:59 crc kubenswrapper[4907]: I0127 18:39:59.119067 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fddws" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" probeResult="failure" output=< Jan 27 18:39:59 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:39:59 crc kubenswrapper[4907]: > Jan 27 18:40:08 crc kubenswrapper[4907]: I0127 18:40:08.134488 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:08 crc kubenswrapper[4907]: I0127 18:40:08.187328 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:08 crc kubenswrapper[4907]: I0127 18:40:08.374545 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.138282 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fddws" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" containerID="cri-o://aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" gracePeriod=2 Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.729252 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.862953 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"ec66903e-4bd3-45bc-915d-4c46b7f50550\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.863468 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"ec66903e-4bd3-45bc-915d-4c46b7f50550\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.863520 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"ec66903e-4bd3-45bc-915d-4c46b7f50550\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.866201 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities" (OuterVolumeSpecName: "utilities") pod "ec66903e-4bd3-45bc-915d-4c46b7f50550" (UID: "ec66903e-4bd3-45bc-915d-4c46b7f50550"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.869783 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57" (OuterVolumeSpecName: "kube-api-access-mzx57") pod "ec66903e-4bd3-45bc-915d-4c46b7f50550" (UID: "ec66903e-4bd3-45bc-915d-4c46b7f50550"). InnerVolumeSpecName "kube-api-access-mzx57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.919062 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec66903e-4bd3-45bc-915d-4c46b7f50550" (UID: "ec66903e-4bd3-45bc-915d-4c46b7f50550"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.966043 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.966089 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.966101 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149451 4907 generic.go:334] "Generic (PLEG): container finished" podID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" exitCode=0 Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149492 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2"} Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149518 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"d870dde3209bdd6b610732d708524f925a35bfaed65822d0ede635108e245da8"} Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149518 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149533 4907 scope.go:117] "RemoveContainer" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.177300 4907 scope.go:117] "RemoveContainer" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.188576 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.198753 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.216580 4907 scope.go:117] "RemoveContainer" containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.251314 4907 scope.go:117] "RemoveContainer" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" Jan 27 18:40:11 crc kubenswrapper[4907]: E0127 18:40:11.251833 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2\": container with ID starting with aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2 not found: ID does not exist" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 
18:40:11.251876 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2"} err="failed to get container status \"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2\": rpc error: code = NotFound desc = could not find container \"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2\": container with ID starting with aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2 not found: ID does not exist" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.251903 4907 scope.go:117] "RemoveContainer" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" Jan 27 18:40:11 crc kubenswrapper[4907]: E0127 18:40:11.252493 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22\": container with ID starting with f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22 not found: ID does not exist" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.252523 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22"} err="failed to get container status \"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22\": rpc error: code = NotFound desc = could not find container \"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22\": container with ID starting with f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22 not found: ID does not exist" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.252539 4907 scope.go:117] "RemoveContainer" containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" Jan 27 18:40:11 crc 
kubenswrapper[4907]: E0127 18:40:11.252942 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274\": container with ID starting with 5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274 not found: ID does not exist" containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.253005 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274"} err="failed to get container status \"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274\": rpc error: code = NotFound desc = could not find container \"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274\": container with ID starting with 5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274 not found: ID does not exist" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.765926 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" path="/var/lib/kubelet/pods/ec66903e-4bd3-45bc-915d-4c46b7f50550/volumes" Jan 27 18:40:25 crc kubenswrapper[4907]: I0127 18:40:25.039440 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:40:25 crc kubenswrapper[4907]: I0127 18:40:25.048965 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:40:25 crc kubenswrapper[4907]: I0127 18:40:25.765983 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" path="/var/lib/kubelet/pods/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4/volumes" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.521693 4907 patch_prober.go:28] interesting 
pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.521740 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.521781 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.522671 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.522717 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61" gracePeriod=600 Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.363673 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61" exitCode=0 Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.363765 
4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61"} Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.364430 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"} Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.364472 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.030340 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.042836 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.056175 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.064050 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.759198 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" path="/var/lib/kubelet/pods/3cabef78-d5b3-4e61-9aa1-0f0529701fa0/volumes" Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.760099 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" path="/var/lib/kubelet/pods/b430d70c-f51d-4ffd-856f-4035b5d053b7/volumes" Jan 27 18:40:47 crc kubenswrapper[4907]: I0127 
18:40:47.603386 4907 generic.go:334] "Generic (PLEG): container finished" podID="0aabc401-314e-438d-920e-1f984949944c" containerID="27500bcc72d151a572ed87e5e290a7f578c943044490d5b92431fe2a66525be4" exitCode=0 Jan 27 18:40:47 crc kubenswrapper[4907]: I0127 18:40:47.603637 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerDied","Data":"27500bcc72d151a572ed87e5e290a7f578c943044490d5b92431fe2a66525be4"} Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.106158 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.265955 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"0aabc401-314e-438d-920e-1f984949944c\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.266420 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"0aabc401-314e-438d-920e-1f984949944c\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.266568 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"0aabc401-314e-438d-920e-1f984949944c\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.274214 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l" (OuterVolumeSpecName: "kube-api-access-wrj8l") pod "0aabc401-314e-438d-920e-1f984949944c" (UID: "0aabc401-314e-438d-920e-1f984949944c"). InnerVolumeSpecName "kube-api-access-wrj8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.297853 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0aabc401-314e-438d-920e-1f984949944c" (UID: "0aabc401-314e-438d-920e-1f984949944c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.305761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory" (OuterVolumeSpecName: "inventory") pod "0aabc401-314e-438d-920e-1f984949944c" (UID: "0aabc401-314e-438d-920e-1f984949944c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.369651 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.369692 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.369708 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.626000 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerDied","Data":"2f310d8883a1b1ec42ffd46d55f5e6c7fe1d1fb6b0cd2987a556d596484d7627"} Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.626051 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f310d8883a1b1ec42ffd46d55f5e6c7fe1d1fb6b0cd2987a556d596484d7627" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.626419 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.715609 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"] Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716431 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aabc401-314e-438d-920e-1f984949944c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716449 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aabc401-314e-438d-920e-1f984949944c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716682 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716690 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716726 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-content" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716734 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-content" Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716751 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-utilities" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716759 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-utilities" Jan 27 18:40:49 crc 
kubenswrapper[4907]: I0127 18:40:49.716999 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aabc401-314e-438d-920e-1f984949944c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.717034 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.717936 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.719911 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.720264 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.720393 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.720502 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.762088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"]
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.880611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.880654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.880788 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.984456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.984589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.984735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.992679 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.992728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.003605 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.037341 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.062898 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"]
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.077326 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"]
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.747236 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"]
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.655544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerStarted","Data":"dec53ee57bf7a93caf0b60d22e09d441a69ffc6e474f5240205d3033cd48d476"}
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.656250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerStarted","Data":"7ba215416677c70483f4396176127eec56d330fe9d96dcabec601cfa3194bcb9"}
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.688113 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" podStartSLOduration=2.262152461 podStartE2EDuration="2.688092692s" podCreationTimestamp="2026-01-27 18:40:49 +0000 UTC" firstStartedPulling="2026-01-27 18:40:50.754720505 +0000 UTC m=+2105.884003117" lastFinishedPulling="2026-01-27 18:40:51.180660736 +0000 UTC m=+2106.309943348" observedRunningTime="2026-01-27 18:40:51.679913889 +0000 UTC m=+2106.809196501" watchObservedRunningTime="2026-01-27 18:40:51.688092692 +0000 UTC m=+2106.817375304"
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.765523 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" path="/var/lib/kubelet/pods/73c0d1c7-cc84-4792-be06-ce4535d854f1/volumes"
Jan 27 18:40:57 crc kubenswrapper[4907]: I0127 18:40:57.719855 4907 generic.go:334] "Generic (PLEG): container finished" podID="907876b3-4761-4612-9c26-3479222c6b72" containerID="dec53ee57bf7a93caf0b60d22e09d441a69ffc6e474f5240205d3033cd48d476" exitCode=0
Jan 27 18:40:57 crc kubenswrapper[4907]: I0127 18:40:57.720345 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerDied","Data":"dec53ee57bf7a93caf0b60d22e09d441a69ffc6e474f5240205d3033cd48d476"}
Jan 27 18:40:58 crc kubenswrapper[4907]: I0127 18:40:58.032269 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"]
Jan 27 18:40:58 crc kubenswrapper[4907]: I0127 18:40:58.044479 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"]
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.276979 4907 scope.go:117] "RemoveContainer" containerID="8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.426251 4907 scope.go:117] "RemoveContainer" containerID="bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.467292 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.493867 4907 scope.go:117] "RemoveContainer" containerID="1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.550086 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"907876b3-4761-4612-9c26-3479222c6b72\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") "
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.550169 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"907876b3-4761-4612-9c26-3479222c6b72\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") "
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.550468 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"907876b3-4761-4612-9c26-3479222c6b72\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") "
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.558427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk" (OuterVolumeSpecName: "kube-api-access-xf8fk") pod "907876b3-4761-4612-9c26-3479222c6b72" (UID: "907876b3-4761-4612-9c26-3479222c6b72"). InnerVolumeSpecName "kube-api-access-xf8fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.561000 4907 scope.go:117] "RemoveContainer" containerID="bcadd918583503f919d13b0b59f8aab8c38430332c4b149a0d7656fa676f51fb"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.588063 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "907876b3-4761-4612-9c26-3479222c6b72" (UID: "907876b3-4761-4612-9c26-3479222c6b72"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.598145 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory" (OuterVolumeSpecName: "inventory") pod "907876b3-4761-4612-9c26-3479222c6b72" (UID: "907876b3-4761-4612-9c26-3479222c6b72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.660126 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.660197 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") on node \"crc\" DevicePath \"\""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.660234 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.739168 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerDied","Data":"7ba215416677c70483f4396176127eec56d330fe9d96dcabec601cfa3194bcb9"}
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.739205 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba215416677c70483f4396176127eec56d330fe9d96dcabec601cfa3194bcb9"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.739250 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.760951 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" path="/var/lib/kubelet/pods/8f9b4dfd-c141-4a97-9656-3f48e7a04309/volumes"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.829055 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"]
Jan 27 18:40:59 crc kubenswrapper[4907]: E0127 18:40:59.829720 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907876b3-4761-4612-9c26-3479222c6b72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.829743 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="907876b3-4761-4612-9c26-3479222c6b72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.830049 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="907876b3-4761-4612-9c26-3479222c6b72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.831054 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.833205 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.833265 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.833651 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.838382 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.844631 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"]
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.864786 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.865335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.865524 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.967377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.967452 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.967541 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.971950 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.972264 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.983701 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:41:00 crc kubenswrapper[4907]: I0127 18:41:00.150621 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:41:00 crc kubenswrapper[4907]: I0127 18:41:00.719027 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"]
Jan 27 18:41:00 crc kubenswrapper[4907]: I0127 18:41:00.758085 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerStarted","Data":"2d04c8b8d7e58c32d8c78d8580c4707b02e5356b1c446bbeabb54375a5b80405"}
Jan 27 18:41:01 crc kubenswrapper[4907]: I0127 18:41:01.776119 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerStarted","Data":"43c478894d2881d7995a63c0dfb289493d31a52495aafb982e64cf8e7f4f6ffd"}
Jan 27 18:41:01 crc kubenswrapper[4907]: I0127 18:41:01.806403 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" podStartSLOduration=2.377653381 podStartE2EDuration="2.806384821s" podCreationTimestamp="2026-01-27 18:40:59 +0000 UTC" firstStartedPulling="2026-01-27 18:41:00.71786733 +0000 UTC m=+2115.847149942" lastFinishedPulling="2026-01-27 18:41:01.14659877 +0000 UTC m=+2116.275881382" observedRunningTime="2026-01-27 18:41:01.792639739 +0000 UTC m=+2116.921922351" watchObservedRunningTime="2026-01-27 18:41:01.806384821 +0000 UTC m=+2116.935667443"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.670392 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"]
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.673830 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.679494 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.679652 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.680131 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.685411 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"]
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.781534 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.781716 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.781776 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.783046 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.783064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.805000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.005145 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:04 crc kubenswrapper[4907]: W0127 18:41:04.496633 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dcc99a8_ae36_4946_9470_e14bf668096c.slice/crio-a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e WatchSource:0}: Error finding container a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e: Status 404 returned error can't find the container with id a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.505513 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"]
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.809484 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" exitCode=0
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.809709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5"}
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.809833 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerStarted","Data":"a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e"}
Jan 27 18:41:05 crc kubenswrapper[4907]: I0127 18:41:05.857719 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerStarted","Data":"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691"}
Jan 27 18:41:11 crc kubenswrapper[4907]: I0127 18:41:11.921402 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" exitCode=0
Jan 27 18:41:11 crc kubenswrapper[4907]: I0127 18:41:11.921453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691"}
Jan 27 18:41:12 crc kubenswrapper[4907]: I0127 18:41:12.933608 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerStarted","Data":"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c"}
Jan 27 18:41:12 crc kubenswrapper[4907]: I0127 18:41:12.962012 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nkhsv" podStartSLOduration=2.431891185 podStartE2EDuration="9.961990072s" podCreationTimestamp="2026-01-27 18:41:03 +0000 UTC" firstStartedPulling="2026-01-27 18:41:04.811571269 +0000 UTC m=+2119.940853881" lastFinishedPulling="2026-01-27 18:41:12.341670146 +0000 UTC m=+2127.470952768" observedRunningTime="2026-01-27 18:41:12.950219486 +0000 UTC m=+2128.079502098" watchObservedRunningTime="2026-01-27 18:41:12.961990072 +0000 UTC m=+2128.091272684"
Jan 27 18:41:14 crc kubenswrapper[4907]: I0127 18:41:14.005656 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:14 crc kubenswrapper[4907]: I0127 18:41:14.006059 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:15 crc kubenswrapper[4907]: I0127 18:41:15.070249 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:15 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:15 crc kubenswrapper[4907]: >
Jan 27 18:41:25 crc kubenswrapper[4907]: I0127 18:41:25.059802 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:25 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:25 crc kubenswrapper[4907]: >
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.181327 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"]
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.187995 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.196346 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"]
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.286866 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.286912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.286980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.388977 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.389045 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.389166 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.389479 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.390060 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.411078 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.515571 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:32 crc kubenswrapper[4907]: I0127 18:41:32.078489 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"]
Jan 27 18:41:32 crc kubenswrapper[4907]: I0127 18:41:32.151411 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerStarted","Data":"58a3f7e832276e4b80160099365d2143d3059adc76e75c5badff9b50d717c5c6"}
Jan 27 18:41:33 crc kubenswrapper[4907]: I0127 18:41:33.163068 4907 generic.go:334] "Generic (PLEG): container finished" podID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" exitCode=0
Jan 27 18:41:33 crc kubenswrapper[4907]: I0127 18:41:33.163146 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860"}
Jan 27 18:41:34 crc kubenswrapper[4907]: I0127 18:41:34.210420 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerStarted","Data":"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed"}
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.044302 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"]
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.056197 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:35 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:35 crc kubenswrapper[4907]: >
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.060415 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"]
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.221543 4907 generic.go:334] "Generic (PLEG): container finished" podID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" exitCode=0
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.221595 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed"}
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.760211 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52256d78-f327-4af2-9452-0483ad62dea0" path="/var/lib/kubelet/pods/52256d78-f327-4af2-9452-0483ad62dea0/volumes"
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.245134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerStarted","Data":"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441"}
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.272902 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6pdrz" podStartSLOduration=3.373450988 podStartE2EDuration="6.272881458s" podCreationTimestamp="2026-01-27 18:41:31 +0000 UTC" firstStartedPulling="2026-01-27 18:41:33.165092197 +0000 UTC m=+2148.294374809" lastFinishedPulling="2026-01-27 18:41:36.064522667 +0000 UTC m=+2151.193805279" observedRunningTime="2026-01-27 18:41:37.26279 +0000 UTC m=+2152.392072632" watchObservedRunningTime="2026-01-27 18:41:37.272881458 +0000 UTC m=+2152.402164070"
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.572184 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:37 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:37 crc kubenswrapper[4907]: >
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.574524 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:37 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:37 crc kubenswrapper[4907]: >
Jan 27 18:41:41 crc kubenswrapper[4907]: I0127 18:41:41.516185 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:41 crc kubenswrapper[4907]: I0127 18:41:41.516804 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:41 crc kubenswrapper[4907]: I0127 18:41:41.577956 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.304015 4907 generic.go:334] "Generic (PLEG): container finished" podID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" containerID="43c478894d2881d7995a63c0dfb289493d31a52495aafb982e64cf8e7f4f6ffd" exitCode=0
Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.304155 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerDied","Data":"43c478894d2881d7995a63c0dfb289493d31a52495aafb982e64cf8e7f4f6ffd"}
Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.399015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6pdrz" Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.456113 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"] Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.833014 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.934878 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.935065 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.935084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.941920 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9" (OuterVolumeSpecName: "kube-api-access-j5gt9") pod "daa3c495-5c9e-45cf-b66a-c452e54e9c06" (UID: 
"daa3c495-5c9e-45cf-b66a-c452e54e9c06"). InnerVolumeSpecName "kube-api-access-j5gt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.969905 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory" (OuterVolumeSpecName: "inventory") pod "daa3c495-5c9e-45cf-b66a-c452e54e9c06" (UID: "daa3c495-5c9e-45cf-b66a-c452e54e9c06"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.970377 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "daa3c495-5c9e-45cf-b66a-c452e54e9c06" (UID: "daa3c495-5c9e-45cf-b66a-c452e54e9c06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.038457 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.038817 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.038832 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.067255 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.123739 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335363 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerDied","Data":"2d04c8b8d7e58c32d8c78d8580c4707b02e5356b1c446bbeabb54375a5b80405"} Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335409 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d04c8b8d7e58c32d8c78d8580c4707b02e5356b1c446bbeabb54375a5b80405" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335573 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335653 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6pdrz" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" containerID="cri-o://bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" gracePeriod=2 Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.438773 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn"] Jan 27 18:41:44 crc kubenswrapper[4907]: E0127 18:41:44.439305 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.439324 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" 
containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.439631 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.440526 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.442736 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.442836 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.443047 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.444086 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.450727 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn"] Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.554599 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.554727 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.554803 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.656839 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.656987 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.657248 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.662728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.668194 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.679922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.819026 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.950342 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.066912 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"8612ce7f-2609-418c-a907-fc9d4a14d650\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.067343 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"8612ce7f-2609-418c-a907-fc9d4a14d650\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.067399 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"8612ce7f-2609-418c-a907-fc9d4a14d650\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.068773 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities" (OuterVolumeSpecName: "utilities") pod "8612ce7f-2609-418c-a907-fc9d4a14d650" (UID: "8612ce7f-2609-418c-a907-fc9d4a14d650"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.072932 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz" (OuterVolumeSpecName: "kube-api-access-v4xjz") pod "8612ce7f-2609-418c-a907-fc9d4a14d650" (UID: "8612ce7f-2609-418c-a907-fc9d4a14d650"). InnerVolumeSpecName "kube-api-access-v4xjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.093282 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8612ce7f-2609-418c-a907-fc9d4a14d650" (UID: "8612ce7f-2609-418c-a907-fc9d4a14d650"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.173412 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.173471 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.173482 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352222 4907 generic.go:334] "Generic (PLEG): container finished" podID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" exitCode=0 Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352272 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441"} Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352326 4907 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"58a3f7e832276e4b80160099365d2143d3059adc76e75c5badff9b50d717c5c6"} Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352347 4907 scope.go:117] "RemoveContainer" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352348 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.391734 4907 scope.go:117] "RemoveContainer" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.424284 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.435132 4907 scope.go:117] "RemoveContainer" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.438201 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.449260 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.449525 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" containerID="cri-o://57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" gracePeriod=2 Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.479921 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.480094 4907 scope.go:117] "RemoveContainer" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" Jan 27 18:41:45 crc kubenswrapper[4907]: E0127 18:41:45.480652 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441\": container with ID starting with bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441 not found: ID does not exist" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.480698 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441"} err="failed to get container status \"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441\": rpc error: code = NotFound desc = could not find container \"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441\": container with ID starting with bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441 not found: ID does not exist" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.480728 4907 scope.go:117] "RemoveContainer" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" Jan 27 18:41:45 crc kubenswrapper[4907]: E0127 18:41:45.481016 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed\": container with ID starting with 9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed not found: ID does not exist" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" Jan 27 18:41:45 crc kubenswrapper[4907]: 
I0127 18:41:45.481042 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed"} err="failed to get container status \"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed\": rpc error: code = NotFound desc = could not find container \"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed\": container with ID starting with 9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed not found: ID does not exist" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.481057 4907 scope.go:117] "RemoveContainer" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" Jan 27 18:41:45 crc kubenswrapper[4907]: E0127 18:41:45.481321 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860\": container with ID starting with cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860 not found: ID does not exist" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.481355 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860"} err="failed to get container status \"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860\": rpc error: code = NotFound desc = could not find container \"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860\": container with ID starting with cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860 not found: ID does not exist" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.790211 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" 
path="/var/lib/kubelet/pods/8612ce7f-2609-418c-a907-fc9d4a14d650/volumes" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.011623 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.045773 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.209352 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"6dcc99a8-ae36-4946-9470-e14bf668096c\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.209985 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"6dcc99a8-ae36-4946-9470-e14bf668096c\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.210267 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"6dcc99a8-ae36-4946-9470-e14bf668096c\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.211700 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities" (OuterVolumeSpecName: "utilities") pod "6dcc99a8-ae36-4946-9470-e14bf668096c" (UID: "6dcc99a8-ae36-4946-9470-e14bf668096c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.214238 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2" (OuterVolumeSpecName: "kube-api-access-q5wd2") pod "6dcc99a8-ae36-4946-9470-e14bf668096c" (UID: "6dcc99a8-ae36-4946-9470-e14bf668096c"). InnerVolumeSpecName "kube-api-access-q5wd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.313106 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.313147 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.342938 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dcc99a8-ae36-4946-9470-e14bf668096c" (UID: "6dcc99a8-ae36-4946-9470-e14bf668096c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367290 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" exitCode=0 Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367335 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367353 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367409 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367433 4907 scope.go:117] "RemoveContainer" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.375842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerStarted","Data":"b66e0cb46aa6d3b6a9d5a90700a232ddc920e42446f3179883bfdd842fe9f9a6"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.375887 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerStarted","Data":"2924331fffdead98570ab789a81c222bbd4941f62da52502196bdf4571e1c0f8"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.401872 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" podStartSLOduration=1.8510723740000001 podStartE2EDuration="2.401848476s" podCreationTimestamp="2026-01-27 18:41:44 +0000 UTC" firstStartedPulling="2026-01-27 18:41:45.490518089 +0000 UTC m=+2160.619800691" 
lastFinishedPulling="2026-01-27 18:41:46.041294181 +0000 UTC m=+2161.170576793" observedRunningTime="2026-01-27 18:41:46.392368416 +0000 UTC m=+2161.521651038" watchObservedRunningTime="2026-01-27 18:41:46.401848476 +0000 UTC m=+2161.531131088" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.404490 4907 scope.go:117] "RemoveContainer" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.415057 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.424521 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"] Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.435940 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"] Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.443711 4907 scope.go:117] "RemoveContainer" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.467317 4907 scope.go:117] "RemoveContainer" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" Jan 27 18:41:46 crc kubenswrapper[4907]: E0127 18:41:46.468128 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c\": container with ID starting with 57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c not found: ID does not exist" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468168 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c"} err="failed to get container status \"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c\": rpc error: code = NotFound desc = could not find container \"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c\": container with ID starting with 57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c not found: ID does not exist" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468195 4907 scope.go:117] "RemoveContainer" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" Jan 27 18:41:46 crc kubenswrapper[4907]: E0127 18:41:46.468590 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691\": container with ID starting with 00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691 not found: ID does not exist" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468628 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691"} err="failed to get container status \"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691\": rpc error: code = NotFound desc = could not find container \"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691\": container with ID starting with 00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691 not found: ID does not exist" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468654 4907 scope.go:117] "RemoveContainer" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" Jan 27 18:41:46 crc kubenswrapper[4907]: E0127 18:41:46.469157 4907 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5\": container with ID starting with 5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5 not found: ID does not exist" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.469198 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5"} err="failed to get container status \"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5\": rpc error: code = NotFound desc = could not find container \"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5\": container with ID starting with 5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5 not found: ID does not exist" Jan 27 18:41:47 crc kubenswrapper[4907]: I0127 18:41:47.764020 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" path="/var/lib/kubelet/pods/6dcc99a8-ae36-4946-9470-e14bf668096c/volumes" Jan 27 18:41:59 crc kubenswrapper[4907]: I0127 18:41:59.734095 4907 scope.go:117] "RemoveContainer" containerID="1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3" Jan 27 18:41:59 crc kubenswrapper[4907]: I0127 18:41:59.781352 4907 scope.go:117] "RemoveContainer" containerID="9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b" Jan 27 18:42:26 crc kubenswrapper[4907]: I0127 18:42:26.521388 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:42:26 crc kubenswrapper[4907]: I0127 18:42:26.522027 4907 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:42:41 crc kubenswrapper[4907]: I0127 18:42:41.040015 4907 generic.go:334] "Generic (PLEG): container finished" podID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerID="b66e0cb46aa6d3b6a9d5a90700a232ddc920e42446f3179883bfdd842fe9f9a6" exitCode=0 Jan 27 18:42:41 crc kubenswrapper[4907]: I0127 18:42:41.040497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerDied","Data":"b66e0cb46aa6d3b6a9d5a90700a232ddc920e42446f3179883bfdd842fe9f9a6"} Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.535698 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.633920 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.634219 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.634318 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.639955 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p" (OuterVolumeSpecName: "kube-api-access-d5z7p") pod "b8f3066f-ed2e-42b5-94ff-e989771dbe8e" (UID: "b8f3066f-ed2e-42b5-94ff-e989771dbe8e"). InnerVolumeSpecName "kube-api-access-d5z7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.686595 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory" (OuterVolumeSpecName: "inventory") pod "b8f3066f-ed2e-42b5-94ff-e989771dbe8e" (UID: "b8f3066f-ed2e-42b5-94ff-e989771dbe8e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.686700 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b8f3066f-ed2e-42b5-94ff-e989771dbe8e" (UID: "b8f3066f-ed2e-42b5-94ff-e989771dbe8e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.737068 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.737103 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.737116 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.063132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerDied","Data":"2924331fffdead98570ab789a81c222bbd4941f62da52502196bdf4571e1c0f8"} Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.063179 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2924331fffdead98570ab789a81c222bbd4941f62da52502196bdf4571e1c0f8" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 
18:42:43.063239 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.161880 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vlfg7"] Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162444 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162470 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162516 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162526 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162876 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162895 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162913 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162931 
4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162937 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162948 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162953 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162962 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162969 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.163203 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.163219 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.163235 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.164075 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.166373 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.166700 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.166810 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.167673 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.185321 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vlfg7"] Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.248125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.248325 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.248764 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.351074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.351166 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.351311 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.355536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc 
kubenswrapper[4907]: I0127 18:42:43.356922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.371871 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.481771 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:44 crc kubenswrapper[4907]: I0127 18:42:44.011224 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vlfg7"] Jan 27 18:42:44 crc kubenswrapper[4907]: W0127 18:42:44.014833 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71334cb5_9354_4f68_91bf_8631e5fa045a.slice/crio-e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e WatchSource:0}: Error finding container e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e: Status 404 returned error can't find the container with id e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e Jan 27 18:42:44 crc kubenswrapper[4907]: I0127 18:42:44.079000 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" 
event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerStarted","Data":"e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e"} Jan 27 18:42:46 crc kubenswrapper[4907]: I0127 18:42:46.104333 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerStarted","Data":"5284cf9eb0b5c0a7efa32633a348cc8288216b28a3edfc0c4808b4c05c8afb4d"} Jan 27 18:42:46 crc kubenswrapper[4907]: I0127 18:42:46.133920 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" podStartSLOduration=1.5240840009999999 podStartE2EDuration="3.133890963s" podCreationTimestamp="2026-01-27 18:42:43 +0000 UTC" firstStartedPulling="2026-01-27 18:42:44.016871412 +0000 UTC m=+2219.146154024" lastFinishedPulling="2026-01-27 18:42:45.626678344 +0000 UTC m=+2220.755960986" observedRunningTime="2026-01-27 18:42:46.118713461 +0000 UTC m=+2221.247996133" watchObservedRunningTime="2026-01-27 18:42:46.133890963 +0000 UTC m=+2221.263173605" Jan 27 18:42:53 crc kubenswrapper[4907]: I0127 18:42:53.182326 4907 generic.go:334] "Generic (PLEG): container finished" podID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerID="5284cf9eb0b5c0a7efa32633a348cc8288216b28a3edfc0c4808b4c05c8afb4d" exitCode=0 Jan 27 18:42:53 crc kubenswrapper[4907]: I0127 18:42:53.182427 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerDied","Data":"5284cf9eb0b5c0a7efa32633a348cc8288216b28a3edfc0c4808b4c05c8afb4d"} Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.682160 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.756648 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"71334cb5-9354-4f68-91bf-8631e5fa045a\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.756884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"71334cb5-9354-4f68-91bf-8631e5fa045a\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.757068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"71334cb5-9354-4f68-91bf-8631e5fa045a\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.762769 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8" (OuterVolumeSpecName: "kube-api-access-rp9c8") pod "71334cb5-9354-4f68-91bf-8631e5fa045a" (UID: "71334cb5-9354-4f68-91bf-8631e5fa045a"). InnerVolumeSpecName "kube-api-access-rp9c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.789108 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71334cb5-9354-4f68-91bf-8631e5fa045a" (UID: "71334cb5-9354-4f68-91bf-8631e5fa045a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.790353 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "71334cb5-9354-4f68-91bf-8631e5fa045a" (UID: "71334cb5-9354-4f68-91bf-8631e5fa045a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.860271 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.860319 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.860335 4907 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.208533 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" 
event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerDied","Data":"e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e"} Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.208600 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.208662 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.279256 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44"] Jan 27 18:42:55 crc kubenswrapper[4907]: E0127 18:42:55.279792 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerName="ssh-known-hosts-edpm-deployment" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.279815 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerName="ssh-known-hosts-edpm-deployment" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.280053 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerName="ssh-known-hosts-edpm-deployment" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.283026 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.286380 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.286884 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.287107 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.287218 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.313095 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44"] Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.373285 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.373361 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.373525 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.477795 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.477926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.478090 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.482330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.482417 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.505446 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.607305 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.204934 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44"] Jan 27 18:42:56 crc kubenswrapper[4907]: W0127 18:42:56.207039 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff08f4dc_f4e3_4e83_b922_32b6296fbee0.slice/crio-5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f WatchSource:0}: Error finding container 5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f: Status 404 returned error can't find the container with id 5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.222070 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerStarted","Data":"5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f"} Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.521274 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.521592 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4907]: I0127 18:42:57.238168 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerStarted","Data":"5631c5ec197a33623d296e962c18c51f44af4a1f32c15531a4cce6284004356a"} Jan 27 18:42:57 crc kubenswrapper[4907]: I0127 18:42:57.270950 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" podStartSLOduration=1.847879176 podStartE2EDuration="2.270927754s" podCreationTimestamp="2026-01-27 18:42:55 +0000 UTC" firstStartedPulling="2026-01-27 18:42:56.212339886 +0000 UTC m=+2231.341622508" lastFinishedPulling="2026-01-27 18:42:56.635388474 +0000 UTC m=+2231.764671086" observedRunningTime="2026-01-27 18:42:57.257614764 +0000 UTC m=+2232.386897396" watchObservedRunningTime="2026-01-27 18:42:57.270927754 +0000 UTC m=+2232.400210376" Jan 27 18:43:06 crc kubenswrapper[4907]: I0127 18:43:06.346533 4907 generic.go:334] "Generic (PLEG): container finished" podID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerID="5631c5ec197a33623d296e962c18c51f44af4a1f32c15531a4cce6284004356a" exitCode=0 Jan 27 18:43:06 crc kubenswrapper[4907]: I0127 18:43:06.346642 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerDied","Data":"5631c5ec197a33623d296e962c18c51f44af4a1f32c15531a4cce6284004356a"} Jan 27 18:43:07 crc kubenswrapper[4907]: I0127 18:43:07.892978 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.019156 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.019205 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.019249 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.028938 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9" (OuterVolumeSpecName: "kube-api-access-5phf9") pod "ff08f4dc-f4e3-4e83-b922-32b6296fbee0" (UID: "ff08f4dc-f4e3-4e83-b922-32b6296fbee0"). InnerVolumeSpecName "kube-api-access-5phf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.074592 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.074682 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff08f4dc-f4e3-4e83-b922-32b6296fbee0" (UID: "ff08f4dc-f4e3-4e83-b922-32b6296fbee0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.085736 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory" (OuterVolumeSpecName: "inventory") pod "ff08f4dc-f4e3-4e83-b922-32b6296fbee0" (UID: "ff08f4dc-f4e3-4e83-b922-32b6296fbee0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.089972 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.122335 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.122369 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.122380 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.372096 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerDied","Data":"5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f"} Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.372438 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.372136 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.471215 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb"] Jan 27 18:43:08 crc kubenswrapper[4907]: E0127 18:43:08.472340 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.472362 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.472967 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.475336 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.478936 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.480585 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.480789 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.480935 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.504955 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb"] Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.635950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.636211 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.636508 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.738677 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.738779 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.738852 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.743695 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.744094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.757238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.802080 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.332266 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb"] Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.381842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerStarted","Data":"c5ec331e7b04bc5a1509015ba342451b8d6a1e601b289cec9ac02e3402049da7"} Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.762486 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" path="/var/lib/kubelet/pods/a67fd41b-79b0-4ab4-86b6-816389597620/volumes" Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.992721 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.996061 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.027028 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.076817 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.076890 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.076926 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.179377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.179497 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.179585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.180147 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.180129 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.197973 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.322008 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.401256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerStarted","Data":"237af818d05778057c6ca186e5d04ce7b64817a943df2a45c5c24a67e6977324"} Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.425768 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" podStartSLOduration=1.9737207890000001 podStartE2EDuration="2.425744194s" podCreationTimestamp="2026-01-27 18:43:08 +0000 UTC" firstStartedPulling="2026-01-27 18:43:09.338494128 +0000 UTC m=+2244.467776730" lastFinishedPulling="2026-01-27 18:43:09.790517523 +0000 UTC m=+2244.919800135" observedRunningTime="2026-01-27 18:43:10.42035898 +0000 UTC m=+2245.549641592" watchObservedRunningTime="2026-01-27 18:43:10.425744194 +0000 UTC m=+2245.555026806" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.905044 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:11 crc kubenswrapper[4907]: I0127 18:43:11.413519 4907 generic.go:334] "Generic (PLEG): container finished" podID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f" exitCode=0 Jan 27 18:43:11 crc kubenswrapper[4907]: I0127 18:43:11.413617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f"} Jan 27 18:43:11 crc kubenswrapper[4907]: I0127 18:43:11.413867 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" 
event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerStarted","Data":"557910ef70730e660beb16aea62a520fa40106c7bfd5cda588ce4a03758df88d"} Jan 27 18:43:13 crc kubenswrapper[4907]: I0127 18:43:13.438874 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerStarted","Data":"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475"} Jan 27 18:43:14 crc kubenswrapper[4907]: I0127 18:43:14.455961 4907 generic.go:334] "Generic (PLEG): container finished" podID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" exitCode=0 Jan 27 18:43:14 crc kubenswrapper[4907]: I0127 18:43:14.456041 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475"} Jan 27 18:43:15 crc kubenswrapper[4907]: I0127 18:43:15.483879 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerStarted","Data":"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb"} Jan 27 18:43:15 crc kubenswrapper[4907]: I0127 18:43:15.524366 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjvlr" podStartSLOduration=3.062846815 podStartE2EDuration="6.52434101s" podCreationTimestamp="2026-01-27 18:43:09 +0000 UTC" firstStartedPulling="2026-01-27 18:43:11.415419456 +0000 UTC m=+2246.544702068" lastFinishedPulling="2026-01-27 18:43:14.876913611 +0000 UTC m=+2250.006196263" observedRunningTime="2026-01-27 18:43:15.505926844 +0000 UTC m=+2250.635209456" watchObservedRunningTime="2026-01-27 18:43:15.52434101 +0000 UTC 
m=+2250.653623632" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.322423 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.323006 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.408020 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.543203 4907 generic.go:334] "Generic (PLEG): container finished" podID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerID="237af818d05778057c6ca186e5d04ce7b64817a943df2a45c5c24a67e6977324" exitCode=0 Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.543286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerDied","Data":"237af818d05778057c6ca186e5d04ce7b64817a943df2a45c5c24a67e6977324"} Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.615286 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.673162 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.039811 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.123771 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.123825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.124039 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.130327 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5" (OuterVolumeSpecName: "kube-api-access-khpc5") pod "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" (UID: "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a"). InnerVolumeSpecName "kube-api-access-khpc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.157793 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" (UID: "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.163846 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory" (OuterVolumeSpecName: "inventory") pod "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" (UID: "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.227170 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.227202 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.227211 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.568910 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" 
event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerDied","Data":"c5ec331e7b04bc5a1509015ba342451b8d6a1e601b289cec9ac02e3402049da7"} Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.569311 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ec331e7b04bc5a1509015ba342451b8d6a1e601b289cec9ac02e3402049da7" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.569091 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vjvlr" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server" containerID="cri-o://1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" gracePeriod=2 Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.568953 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.695634 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"] Jan 27 18:43:22 crc kubenswrapper[4907]: E0127 18:43:22.696435 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.696459 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.696779 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.697787 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.701852 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702059 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702088 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702122 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702174 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702227 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702354 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702451 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702543 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.744181 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"] Jan 27 18:43:22 crc 
kubenswrapper[4907]: I0127 18:43:22.840360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840454 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: 
I0127 18:43:22.840842 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840872 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841020 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841083 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841196 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841232 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841254 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841318 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841364 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841459 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841505 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943701 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943787 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943848 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943884 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 
18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943912 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943935 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943953 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944374 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944400 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944492 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944623 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944756 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.949576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.949919 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.951598 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.952451 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.952529 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.953161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.953743 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.955604 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.956159 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: 
\"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.956311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.956516 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.957745 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.958170 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.961124 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.963813 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.968796 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.055125 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.188720 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.354263 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"a7401251-23ae-4ff6-8e3f-b40f4d072626\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.354421 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"a7401251-23ae-4ff6-8e3f-b40f4d072626\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.354619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"a7401251-23ae-4ff6-8e3f-b40f4d072626\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.356317 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities" (OuterVolumeSpecName: "utilities") pod "a7401251-23ae-4ff6-8e3f-b40f4d072626" (UID: "a7401251-23ae-4ff6-8e3f-b40f4d072626"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.356752 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.374427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st" (OuterVolumeSpecName: "kube-api-access-9n6st") pod "a7401251-23ae-4ff6-8e3f-b40f4d072626" (UID: "a7401251-23ae-4ff6-8e3f-b40f4d072626"). InnerVolumeSpecName "kube-api-access-9n6st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.460127 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588207 4907 generic.go:334] "Generic (PLEG): container finished" podID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" exitCode=0 Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588263 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb"} Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588293 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588316 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"557910ef70730e660beb16aea62a520fa40106c7bfd5cda588ce4a03758df88d"} Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588351 4907 scope.go:117] "RemoveContainer" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.626142 4907 scope.go:117] "RemoveContainer" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.671461 4907 scope.go:117] "RemoveContainer" containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.703892 4907 scope.go:117] "RemoveContainer" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" Jan 27 18:43:23 crc kubenswrapper[4907]: E0127 18:43:23.704420 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb\": container with ID starting with 1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb not found: ID does not exist" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.704508 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb"} err="failed to get container status \"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb\": rpc error: code = NotFound desc = could not find container 
\"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb\": container with ID starting with 1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb not found: ID does not exist" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.704649 4907 scope.go:117] "RemoveContainer" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" Jan 27 18:43:23 crc kubenswrapper[4907]: E0127 18:43:23.705110 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475\": container with ID starting with 62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475 not found: ID does not exist" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.705167 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475"} err="failed to get container status \"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475\": rpc error: code = NotFound desc = could not find container \"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475\": container with ID starting with 62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475 not found: ID does not exist" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.705204 4907 scope.go:117] "RemoveContainer" containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f" Jan 27 18:43:23 crc kubenswrapper[4907]: E0127 18:43:23.705665 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f\": container with ID starting with 96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f not found: ID does not exist" 
containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f"
Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.705707 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f"} err="failed to get container status \"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f\": rpc error: code = NotFound desc = could not find container \"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f\": container with ID starting with 96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f not found: ID does not exist"
Jan 27 18:43:23 crc kubenswrapper[4907]: W0127 18:43:23.709303 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cadb1da_1dd2_49ac_a171_c672c006bfa8.slice/crio-1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b WatchSource:0}: Error finding container 1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b: Status 404 returned error can't find the container with id 1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b
Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.721106 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"]
Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.129197 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7401251-23ae-4ff6-8e3f-b40f4d072626" (UID: "a7401251-23ae-4ff6-8e3f-b40f4d072626"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.178434 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.233509 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"]
Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.247427 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"]
Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.604159 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerStarted","Data":"2c2a8f81dec4df057264b31b53aa2f39ed5678418cf5de50d0c7b97df45f8aed"}
Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.604303 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerStarted","Data":"1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b"}
Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.630899 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" podStartSLOduration=2.059303322 podStartE2EDuration="2.630871867s" podCreationTimestamp="2026-01-27 18:43:22 +0000 UTC" firstStartedPulling="2026-01-27 18:43:23.71220498 +0000 UTC m=+2258.841487592" lastFinishedPulling="2026-01-27 18:43:24.283773495 +0000 UTC m=+2259.413056137" observedRunningTime="2026-01-27 18:43:24.623503727 +0000 UTC m=+2259.752786339" watchObservedRunningTime="2026-01-27 18:43:24.630871867 +0000 UTC m=+2259.760154519"
Jan 27 18:43:25 crc kubenswrapper[4907]: I0127 18:43:25.769329 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" path="/var/lib/kubelet/pods/a7401251-23ae-4ff6-8e3f-b40f4d072626/volumes"
Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.522028 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.522138 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.522225 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh"
Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.523950 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.524136 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" gracePeriod=600
Jan 27 18:43:26 crc kubenswrapper[4907]: E0127 18:43:26.651844 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.648880 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" exitCode=0
Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.648929 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"}
Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.648965 4907 scope.go:117] "RemoveContainer" containerID="659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61"
Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.649841 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"
Jan 27 18:43:27 crc kubenswrapper[4907]: E0127 18:43:27.650148 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:43:39 crc kubenswrapper[4907]: I0127 18:43:39.748546 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"
Jan 27 18:43:39 crc kubenswrapper[4907]: E0127 18:43:39.749782 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:43:52 crc kubenswrapper[4907]: I0127 18:43:52.748415 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"
Jan 27 18:43:52 crc kubenswrapper[4907]: E0127 18:43:52.749186 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:43:59 crc kubenswrapper[4907]: I0127 18:43:59.997895 4907 scope.go:117] "RemoveContainer" containerID="fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490"
Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.046026 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zhncj"]
Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.059329 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-zhncj"]
Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.748527 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"
Jan 27 18:44:07 crc kubenswrapper[4907]: E0127 18:44:07.748991 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.761689 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" path="/var/lib/kubelet/pods/ee2938a8-fe59-4c5a-abd0-7957ecb6b796/volumes"
Jan 27 18:44:12 crc kubenswrapper[4907]: I0127 18:44:12.165209 4907 generic.go:334] "Generic (PLEG): container finished" podID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerID="2c2a8f81dec4df057264b31b53aa2f39ed5678418cf5de50d0c7b97df45f8aed" exitCode=0
Jan 27 18:44:12 crc kubenswrapper[4907]: I0127 18:44:12.165314 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerDied","Data":"2c2a8f81dec4df057264b31b53aa2f39ed5678418cf5de50d0c7b97df45f8aed"}
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.706908 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750837 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750919 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750954 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750971 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751046 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751085 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751118 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751139 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751162 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751186 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751207 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751270 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751366 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751387 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") "
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.778088 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.779523 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.780231 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782535 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782662 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd" (OuterVolumeSpecName: "kube-api-access-ksxrd") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "kube-api-access-ksxrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782774 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782863 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.785791 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.786294 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.788952 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.827767 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.827831 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.842924 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.843009 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory" (OuterVolumeSpecName: "inventory") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.843824 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.844659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854023 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854065 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854076 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854089 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854099 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854109 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854119 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854128 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854136 4907 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854144 4907 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854154 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854164 4907 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854173 4907 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854182 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854192 4907 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854201 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.185956 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerDied","Data":"1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b"}
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.186001 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.186063 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.289510 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"]
Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290097 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-utilities"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290114 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-utilities"
Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290130 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290138 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290160 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290166 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server"
Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290176 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-content"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290183 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-content"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290364 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290379 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.291184 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.294290 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.294776 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.295018 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.296526 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.299713 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.305498 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"]
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.369749 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.369849 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.369910 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.370041 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.370079 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472181 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472355 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472405 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.473355 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.477005 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.477197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"
Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.480907 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") "
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.493836 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.606721 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:15 crc kubenswrapper[4907]: I0127 18:44:15.178892 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"] Jan 27 18:44:15 crc kubenswrapper[4907]: I0127 18:44:15.200041 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerStarted","Data":"848d76d694485867a5ed231277687643ca03ae0dcbec0944ab9e3db4d871eaca"} Jan 27 18:44:16 crc kubenswrapper[4907]: I0127 18:44:16.212296 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerStarted","Data":"72c1cd834ee86740416d86d919ee684eb50e58c8755fd63dfbb03b6b994e3c9e"} Jan 27 18:44:16 crc kubenswrapper[4907]: I0127 18:44:16.234972 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" podStartSLOduration=1.575798606 podStartE2EDuration="2.234951275s" podCreationTimestamp="2026-01-27 18:44:14 +0000 UTC" firstStartedPulling="2026-01-27 18:44:15.184806196 +0000 UTC m=+2310.314088808" lastFinishedPulling="2026-01-27 18:44:15.843958865 +0000 UTC 
m=+2310.973241477" observedRunningTime="2026-01-27 18:44:16.225722553 +0000 UTC m=+2311.355005175" watchObservedRunningTime="2026-01-27 18:44:16.234951275 +0000 UTC m=+2311.364233887" Jan 27 18:44:20 crc kubenswrapper[4907]: I0127 18:44:20.747979 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:44:20 crc kubenswrapper[4907]: E0127 18:44:20.748997 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:44:33 crc kubenswrapper[4907]: I0127 18:44:33.748826 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:44:33 crc kubenswrapper[4907]: E0127 18:44:33.750168 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:44:47 crc kubenswrapper[4907]: I0127 18:44:47.749166 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:44:47 crc kubenswrapper[4907]: E0127 18:44:47.750194 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.109791 4907 scope.go:117] "RemoveContainer" containerID="71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.183137 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.185036 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.187654 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.194488 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.196324 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.243449 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.243785 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.244001 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.346691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.346876 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.347267 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.348020 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.353194 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.375374 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.520904 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.994861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 18:45:00 crc kubenswrapper[4907]: W0127 18:45:00.996862 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8fea3de_b1db_4c31_8636_329b2d296f02.slice/crio-b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a WatchSource:0}: Error finding container b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a: Status 404 returned error can't find the container with id b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a Jan 27 18:45:01 crc kubenswrapper[4907]: I0127 18:45:01.729841 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerStarted","Data":"12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc"} Jan 27 18:45:01 crc kubenswrapper[4907]: I0127 18:45:01.730201 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerStarted","Data":"b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a"} Jan 27 18:45:01 crc kubenswrapper[4907]: I0127 18:45:01.749041 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:01 crc kubenswrapper[4907]: E0127 18:45:01.749555 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:02 crc kubenswrapper[4907]: I0127 18:45:02.741683 4907 generic.go:334] "Generic (PLEG): container finished" podID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerID="12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc" exitCode=0 Jan 27 18:45:02 crc kubenswrapper[4907]: I0127 18:45:02.742044 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerDied","Data":"12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc"} Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.161433 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.335473 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"a8fea3de-b1db-4c31-8636-329b2d296f02\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.335533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"a8fea3de-b1db-4c31-8636-329b2d296f02\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.336661 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"a8fea3de-b1db-4c31-8636-329b2d296f02\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.337309 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8fea3de-b1db-4c31-8636-329b2d296f02" (UID: "a8fea3de-b1db-4c31-8636-329b2d296f02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.341219 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a8fea3de-b1db-4c31-8636-329b2d296f02" (UID: "a8fea3de-b1db-4c31-8636-329b2d296f02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.371911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs" (OuterVolumeSpecName: "kube-api-access-78tzs") pod "a8fea3de-b1db-4c31-8636-329b2d296f02" (UID: "a8fea3de-b1db-4c31-8636-329b2d296f02"). InnerVolumeSpecName "kube-api-access-78tzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.447433 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.447487 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.447506 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.753070 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.771808 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerDied","Data":"b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a"} Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.771965 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a" Jan 27 18:45:04 crc kubenswrapper[4907]: I0127 18:45:04.260223 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:45:04 crc kubenswrapper[4907]: I0127 18:45:04.270672 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:45:05 crc kubenswrapper[4907]: I0127 18:45:05.774162 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" path="/var/lib/kubelet/pods/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf/volumes" Jan 27 18:45:15 crc kubenswrapper[4907]: I0127 18:45:15.748047 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:15 crc kubenswrapper[4907]: E0127 18:45:15.749226 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:25 crc kubenswrapper[4907]: I0127 18:45:25.004262 4907 generic.go:334] "Generic (PLEG): container finished" podID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerID="72c1cd834ee86740416d86d919ee684eb50e58c8755fd63dfbb03b6b994e3c9e" exitCode=0 Jan 27 18:45:25 crc kubenswrapper[4907]: I0127 18:45:25.004338 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerDied","Data":"72c1cd834ee86740416d86d919ee684eb50e58c8755fd63dfbb03b6b994e3c9e"} Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.542488 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654214 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654440 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654495 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654666 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.664923 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.665083 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2" (OuterVolumeSpecName: "kube-api-access-gzxc2") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "kube-api-access-gzxc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.696405 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.697129 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory" (OuterVolumeSpecName: "inventory") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.701709 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757201 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757231 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757242 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757252 4907 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757262 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.026745 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerDied","Data":"848d76d694485867a5ed231277687643ca03ae0dcbec0944ab9e3db4d871eaca"} Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.027104 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848d76d694485867a5ed231277687643ca03ae0dcbec0944ab9e3db4d871eaca" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.026802 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.202626 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc"] Jan 27 18:45:27 crc kubenswrapper[4907]: E0127 18:45:27.203409 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerName="collect-profiles" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203431 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerName="collect-profiles" Jan 27 18:45:27 crc kubenswrapper[4907]: E0127 18:45:27.203495 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203507 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203832 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerName="collect-profiles" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203854 4907 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.205112 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210105 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210228 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210344 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210422 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210536 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.211906 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.216234 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc"] Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.373755 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.373804 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377054 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377131 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377244 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: 
\"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377331 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480286 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480402 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480835 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.484900 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.484965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.485745 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.500081 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.500392 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.505150 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.557444 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:28 crc kubenswrapper[4907]: I0127 18:45:28.150644 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc"] Jan 27 18:45:28 crc kubenswrapper[4907]: I0127 18:45:28.162281 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:45:28 crc kubenswrapper[4907]: I0127 18:45:28.749380 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:28 crc kubenswrapper[4907]: E0127 18:45:28.750032 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:29 crc kubenswrapper[4907]: I0127 18:45:29.054893 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerStarted","Data":"e7ec6eb1b56eca483aafca9fcbb1b07cb4e62783ded4f37152a6de06677afcc1"} Jan 27 18:45:30 crc 
kubenswrapper[4907]: I0127 18:45:30.064918 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerStarted","Data":"bb391d40e733b99e5a006459969a6fbbe8617525abc0815470e85a67c237b793"} Jan 27 18:45:30 crc kubenswrapper[4907]: I0127 18:45:30.107709 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" podStartSLOduration=1.9877750349999999 podStartE2EDuration="3.107681675s" podCreationTimestamp="2026-01-27 18:45:27 +0000 UTC" firstStartedPulling="2026-01-27 18:45:28.162036763 +0000 UTC m=+2383.291319375" lastFinishedPulling="2026-01-27 18:45:29.281943413 +0000 UTC m=+2384.411226015" observedRunningTime="2026-01-27 18:45:30.085183896 +0000 UTC m=+2385.214466508" watchObservedRunningTime="2026-01-27 18:45:30.107681675 +0000 UTC m=+2385.236964307" Jan 27 18:45:39 crc kubenswrapper[4907]: I0127 18:45:39.748773 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:39 crc kubenswrapper[4907]: E0127 18:45:39.749726 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:50 crc kubenswrapper[4907]: I0127 18:45:50.749025 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:50 crc kubenswrapper[4907]: E0127 18:45:50.749828 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:00 crc kubenswrapper[4907]: I0127 18:46:00.226134 4907 scope.go:117] "RemoveContainer" containerID="52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79" Jan 27 18:46:01 crc kubenswrapper[4907]: I0127 18:46:01.748983 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:01 crc kubenswrapper[4907]: E0127 18:46:01.751405 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:15 crc kubenswrapper[4907]: I0127 18:46:15.759721 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:15 crc kubenswrapper[4907]: E0127 18:46:15.760749 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:22 crc kubenswrapper[4907]: I0127 18:46:22.617312 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerID="bb391d40e733b99e5a006459969a6fbbe8617525abc0815470e85a67c237b793" exitCode=0 Jan 27 18:46:22 crc kubenswrapper[4907]: I0127 18:46:22.617371 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerDied","Data":"bb391d40e733b99e5a006459969a6fbbe8617525abc0815470e85a67c237b793"} Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.176657 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281191 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281277 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281409 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281516 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281668 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281777 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.290730 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.291077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn" (OuterVolumeSpecName: "kube-api-access-85wvn") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "kube-api-access-85wvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.321688 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.323736 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.323818 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.325927 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory" (OuterVolumeSpecName: "inventory") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384323 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384355 4907 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384365 4907 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384375 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384386 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384396 4907 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.638833 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerDied","Data":"e7ec6eb1b56eca483aafca9fcbb1b07cb4e62783ded4f37152a6de06677afcc1"} Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.638874 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ec6eb1b56eca483aafca9fcbb1b07cb4e62783ded4f37152a6de06677afcc1" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.639198 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.805002 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"] Jan 27 18:46:24 crc kubenswrapper[4907]: E0127 18:46:24.805576 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.805594 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.805848 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.806758 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809451 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809508 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809657 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809686 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.810019 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.815449 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"] Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.894978 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.895690 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: 
\"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.895914 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.896078 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.896331 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999207 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999483 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999594 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999647 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.004325 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.006036 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.007576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.008131 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.034381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.124220 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.680932 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"] Jan 27 18:46:26 crc kubenswrapper[4907]: I0127 18:46:26.658598 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerStarted","Data":"bbf713b0078b30d25d8341c53102c3f52fc6eae1c45613502e03acdfc202e261"} Jan 27 18:46:27 crc kubenswrapper[4907]: I0127 18:46:27.749204 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:27 crc kubenswrapper[4907]: E0127 18:46:27.750118 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:28 crc kubenswrapper[4907]: I0127 18:46:28.683509 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerStarted","Data":"2ac918adbc99a2af2c22900e474cfa052487fa6b6d98d9b74ea574cba87e924a"} Jan 27 18:46:28 crc kubenswrapper[4907]: I0127 18:46:28.710249 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" podStartSLOduration=2.728148676 podStartE2EDuration="4.710230443s" podCreationTimestamp="2026-01-27 18:46:24 +0000 UTC" firstStartedPulling="2026-01-27 18:46:25.684237495 +0000 
UTC m=+2440.813520117" lastFinishedPulling="2026-01-27 18:46:27.666319272 +0000 UTC m=+2442.795601884" observedRunningTime="2026-01-27 18:46:28.700079744 +0000 UTC m=+2443.829362366" watchObservedRunningTime="2026-01-27 18:46:28.710230443 +0000 UTC m=+2443.839513055" Jan 27 18:46:38 crc kubenswrapper[4907]: I0127 18:46:38.747830 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:38 crc kubenswrapper[4907]: E0127 18:46:38.748664 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:50 crc kubenswrapper[4907]: I0127 18:46:50.749382 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:50 crc kubenswrapper[4907]: E0127 18:46:50.750140 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:01 crc kubenswrapper[4907]: I0127 18:47:01.751152 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:01 crc kubenswrapper[4907]: E0127 18:47:01.752243 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:16 crc kubenswrapper[4907]: I0127 18:47:16.748776 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:16 crc kubenswrapper[4907]: E0127 18:47:16.749364 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:28 crc kubenswrapper[4907]: I0127 18:47:28.748264 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:28 crc kubenswrapper[4907]: E0127 18:47:28.749118 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:39 crc kubenswrapper[4907]: I0127 18:47:39.748912 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:39 crc kubenswrapper[4907]: E0127 18:47:39.749968 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:51 crc kubenswrapper[4907]: I0127 18:47:51.748311 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:51 crc kubenswrapper[4907]: E0127 18:47:51.749624 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:48:03 crc kubenswrapper[4907]: I0127 18:48:03.748019 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:48:03 crc kubenswrapper[4907]: E0127 18:48:03.748969 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:48:17 crc kubenswrapper[4907]: I0127 18:48:17.749176 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:48:17 crc kubenswrapper[4907]: E0127 18:48:17.750699 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:48:28 crc kubenswrapper[4907]: I0127 18:48:28.749887 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:48:29 crc kubenswrapper[4907]: I0127 18:48:29.065415 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5"} Jan 27 18:50:56 crc kubenswrapper[4907]: I0127 18:50:56.521514 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:50:56 crc kubenswrapper[4907]: I0127 18:50:56.522215 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.586206 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.589481 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.605317 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.712045 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.712128 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.712286 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816463 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816595 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816894 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816973 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.817323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.838121 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.926163 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:00 crc kubenswrapper[4907]: W0127 18:51:00.537126 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa8f3d6c_3ea2_41ea_ac13_b12933bf4878.slice/crio-34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d WatchSource:0}: Error finding container 34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d: Status 404 returned error can't find the container with id 34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d Jan 27 18:51:00 crc kubenswrapper[4907]: I0127 18:51:00.538385 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:00 crc kubenswrapper[4907]: I0127 18:51:00.727423 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerStarted","Data":"34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d"} Jan 27 18:51:01 crc kubenswrapper[4907]: I0127 18:51:01.744063 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" exitCode=0 Jan 27 18:51:01 crc kubenswrapper[4907]: I0127 18:51:01.744139 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd"} Jan 27 18:51:01 crc kubenswrapper[4907]: I0127 18:51:01.751095 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:51:03 crc kubenswrapper[4907]: I0127 18:51:03.862438 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerStarted","Data":"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb"} Jan 27 18:51:05 crc kubenswrapper[4907]: I0127 18:51:05.886331 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" exitCode=0 Jan 27 18:51:05 crc kubenswrapper[4907]: I0127 18:51:05.886414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb"} Jan 27 18:51:06 crc kubenswrapper[4907]: I0127 18:51:06.934107 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerStarted","Data":"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19"} Jan 27 18:51:06 crc kubenswrapper[4907]: I0127 18:51:06.958107 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wp8d9" podStartSLOduration=3.421455706 podStartE2EDuration="7.958085431s" podCreationTimestamp="2026-01-27 18:50:59 +0000 UTC" firstStartedPulling="2026-01-27 18:51:01.750696059 +0000 UTC m=+2716.879978671" lastFinishedPulling="2026-01-27 18:51:06.287325784 +0000 UTC m=+2721.416608396" observedRunningTime="2026-01-27 18:51:06.952627026 +0000 UTC m=+2722.081909658" watchObservedRunningTime="2026-01-27 18:51:06.958085431 +0000 UTC m=+2722.087368033" Jan 27 18:51:09 crc kubenswrapper[4907]: I0127 18:51:09.927741 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:09 crc kubenswrapper[4907]: I0127 18:51:09.928296 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:09 crc kubenswrapper[4907]: I0127 18:51:09.977169 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:19 crc kubenswrapper[4907]: I0127 18:51:19.984394 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.051709 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.082276 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wp8d9" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" containerID="cri-o://4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" gracePeriod=2 Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.666629 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.738497 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.738773 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.738890 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.740843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities" (OuterVolumeSpecName: "utilities") pod "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" (UID: "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.746449 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p" (OuterVolumeSpecName: "kube-api-access-m6h2p") pod "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" (UID: "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878"). InnerVolumeSpecName "kube-api-access-m6h2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.799165 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" (UID: "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.843582 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.843969 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") on node \"crc\" DevicePath \"\"" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.843986 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100282 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" exitCode=0 Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100323 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19"} Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100355 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d"} Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100377 4907 scope.go:117] "RemoveContainer" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100382 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.139677 4907 scope.go:117] "RemoveContainer" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.149068 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.161073 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.177587 4907 scope.go:117] "RemoveContainer" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.239424 4907 scope.go:117] "RemoveContainer" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" Jan 27 18:51:21 crc kubenswrapper[4907]: E0127 18:51:21.240030 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19\": container with ID starting with 4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19 not found: ID does not exist" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 
18:51:21.240075 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19"} err="failed to get container status \"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19\": rpc error: code = NotFound desc = could not find container \"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19\": container with ID starting with 4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19 not found: ID does not exist" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240100 4907 scope.go:117] "RemoveContainer" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" Jan 27 18:51:21 crc kubenswrapper[4907]: E0127 18:51:21.240415 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb\": container with ID starting with 0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb not found: ID does not exist" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240448 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb"} err="failed to get container status \"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb\": rpc error: code = NotFound desc = could not find container \"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb\": container with ID starting with 0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb not found: ID does not exist" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240487 4907 scope.go:117] "RemoveContainer" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" Jan 27 18:51:21 crc 
kubenswrapper[4907]: E0127 18:51:21.240734 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd\": container with ID starting with d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd not found: ID does not exist" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240756 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd"} err="failed to get container status \"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd\": rpc error: code = NotFound desc = could not find container \"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd\": container with ID starting with d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd not found: ID does not exist" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.767546 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" path="/var/lib/kubelet/pods/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878/volumes" Jan 27 18:51:26 crc kubenswrapper[4907]: I0127 18:51:26.521234 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:51:26 crc kubenswrapper[4907]: I0127 18:51:26.521824 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.605846 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:51:54 crc kubenswrapper[4907]: E0127 18:51:54.607307 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607329 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" Jan 27 18:51:54 crc kubenswrapper[4907]: E0127 18:51:54.607395 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-utilities" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607405 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-utilities" Jan 27 18:51:54 crc kubenswrapper[4907]: E0127 18:51:54.607413 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-content" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607424 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-content" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607833 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.610246 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.622911 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.682167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.682454 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.682654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.785307 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.785507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.785817 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.786520 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.786618 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.815310 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.934592 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:55 crc kubenswrapper[4907]: I0127 18:51:55.419950 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:51:55 crc kubenswrapper[4907]: I0127 18:51:55.453920 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerStarted","Data":"d030af88e069347ddd4b826a8d4f64433f67d574c407eadd28610f00671b7b2e"} Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.855321 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.855624 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.855672 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.856676 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:51:56 crc kubenswrapper[4907]: 
I0127 18:51:56.856734 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5" gracePeriod=600 Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.901277 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c595f48-fb2b-4908-ad96-8607334515b9" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100" exitCode=0 Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.901663 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100"} Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994322 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5" exitCode=0 Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994412 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5"} Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"} Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994940 4907 scope.go:117] "RemoveContainer" 
containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.997601 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerStarted","Data":"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"} Jan 27 18:52:05 crc kubenswrapper[4907]: I0127 18:52:05.184661 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c595f48-fb2b-4908-ad96-8607334515b9" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22" exitCode=0 Jan 27 18:52:05 crc kubenswrapper[4907]: I0127 18:52:05.184714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"} Jan 27 18:52:06 crc kubenswrapper[4907]: I0127 18:52:06.200120 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerStarted","Data":"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"} Jan 27 18:52:06 crc kubenswrapper[4907]: I0127 18:52:06.224666 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xgqxb" podStartSLOduration=3.500593011 podStartE2EDuration="12.224648902s" podCreationTimestamp="2026-01-27 18:51:54 +0000 UTC" firstStartedPulling="2026-01-27 18:51:56.905052034 +0000 UTC m=+2772.034334646" lastFinishedPulling="2026-01-27 18:52:05.629107935 +0000 UTC m=+2780.758390537" observedRunningTime="2026-01-27 18:52:06.220966938 +0000 UTC m=+2781.350249560" watchObservedRunningTime="2026-01-27 18:52:06.224648902 +0000 UTC m=+2781.353931514" Jan 27 18:52:14 crc kubenswrapper[4907]: I0127 18:52:14.934764 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:14 crc kubenswrapper[4907]: I0127 18:52:14.935713 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.539169 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.543324 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.573165 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.734362 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.734456 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.734493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"redhat-marketplace-9dj2r\" (UID: 
\"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.836532 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.836658 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.836705 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.837129 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.837386 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " 
pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.857584 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.869911 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:16 crc kubenswrapper[4907]: I0127 18:52:16.024625 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xgqxb" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" probeResult="failure" output=< Jan 27 18:52:16 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:52:16 crc kubenswrapper[4907]: > Jan 27 18:52:16 crc kubenswrapper[4907]: I0127 18:52:16.405393 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:17 crc kubenswrapper[4907]: I0127 18:52:17.326150 4907 generic.go:334] "Generic (PLEG): container finished" podID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" exitCode=0 Jan 27 18:52:17 crc kubenswrapper[4907]: I0127 18:52:17.326200 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a"} Jan 27 18:52:17 crc kubenswrapper[4907]: I0127 18:52:17.326680 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" 
event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerStarted","Data":"7f8195908aa09dec9904b1e2fcfaf41d8cc54cfdf36b2b2802d16bd2dd4d46fd"} Jan 27 18:52:18 crc kubenswrapper[4907]: I0127 18:52:18.337786 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerStarted","Data":"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0"} Jan 27 18:52:20 crc kubenswrapper[4907]: I0127 18:52:20.385688 4907 generic.go:334] "Generic (PLEG): container finished" podID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" exitCode=0 Jan 27 18:52:20 crc kubenswrapper[4907]: I0127 18:52:20.385737 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0"} Jan 27 18:52:21 crc kubenswrapper[4907]: I0127 18:52:21.399225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerStarted","Data":"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4"} Jan 27 18:52:21 crc kubenswrapper[4907]: I0127 18:52:21.428162 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9dj2r" podStartSLOduration=2.929622184 podStartE2EDuration="6.428135257s" podCreationTimestamp="2026-01-27 18:52:15 +0000 UTC" firstStartedPulling="2026-01-27 18:52:17.329373384 +0000 UTC m=+2792.458655986" lastFinishedPulling="2026-01-27 18:52:20.827886437 +0000 UTC m=+2795.957169059" observedRunningTime="2026-01-27 18:52:21.418501725 +0000 UTC m=+2796.547784337" watchObservedRunningTime="2026-01-27 18:52:21.428135257 +0000 UTC 
m=+2796.557417869" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.870787 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.871425 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.919778 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.989596 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xgqxb" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" probeResult="failure" output=< Jan 27 18:52:25 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:52:25 crc kubenswrapper[4907]: > Jan 27 18:52:26 crc kubenswrapper[4907]: I0127 18:52:26.537468 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:26 crc kubenswrapper[4907]: I0127 18:52:26.590419 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:28 crc kubenswrapper[4907]: I0127 18:52:28.490587 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9dj2r" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server" containerID="cri-o://5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" gracePeriod=2 Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.023449 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.183898 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.184211 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.184241 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.184857 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities" (OuterVolumeSpecName: "utilities") pod "5ed1dd13-007f-48ad-9dc9-6870f507c44e" (UID: "5ed1dd13-007f-48ad-9dc9-6870f507c44e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.190242 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g" (OuterVolumeSpecName: "kube-api-access-29m5g") pod "5ed1dd13-007f-48ad-9dc9-6870f507c44e" (UID: "5ed1dd13-007f-48ad-9dc9-6870f507c44e"). InnerVolumeSpecName "kube-api-access-29m5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.211312 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ed1dd13-007f-48ad-9dc9-6870f507c44e" (UID: "5ed1dd13-007f-48ad-9dc9-6870f507c44e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.286981 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.287015 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.287026 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.501938 4907 generic.go:334] "Generic (PLEG): container finished" podID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" exitCode=0 Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.501977 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4"} Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.502002 4907 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.502010 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"7f8195908aa09dec9904b1e2fcfaf41d8cc54cfdf36b2b2802d16bd2dd4d46fd"} Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.502044 4907 scope.go:117] "RemoveContainer" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.529130 4907 scope.go:117] "RemoveContainer" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.543048 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.559785 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.571725 4907 scope.go:117] "RemoveContainer" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.642184 4907 scope.go:117] "RemoveContainer" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" Jan 27 18:52:29 crc kubenswrapper[4907]: E0127 18:52:29.642718 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4\": container with ID starting with 5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4 not found: ID does not exist" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.642759 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4"} err="failed to get container status \"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4\": rpc error: code = NotFound desc = could not find container \"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4\": container with ID starting with 5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4 not found: ID does not exist" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.642785 4907 scope.go:117] "RemoveContainer" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" Jan 27 18:52:29 crc kubenswrapper[4907]: E0127 18:52:29.643192 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0\": container with ID starting with edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0 not found: ID does not exist" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.643237 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0"} err="failed to get container status \"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0\": rpc error: code = NotFound desc = could not find container \"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0\": container with ID starting with edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0 not found: ID does not exist" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.643268 4907 scope.go:117] "RemoveContainer" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" Jan 27 18:52:29 crc kubenswrapper[4907]: E0127 
18:52:29.643587 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a\": container with ID starting with 5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a not found: ID does not exist" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.643613 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a"} err="failed to get container status \"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a\": rpc error: code = NotFound desc = could not find container \"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a\": container with ID starting with 5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a not found: ID does not exist" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.759940 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" path="/var/lib/kubelet/pods/5ed1dd13-007f-48ad-9dc9-6870f507c44e/volumes" Jan 27 18:52:34 crc kubenswrapper[4907]: I0127 18:52:34.995581 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:35 crc kubenswrapper[4907]: I0127 18:52:35.051069 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:35 crc kubenswrapper[4907]: I0127 18:52:35.233627 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:52:36 crc kubenswrapper[4907]: I0127 18:52:36.571046 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xgqxb" 
podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" containerID="cri-o://29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" gracePeriod=2 Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.102202 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.272640 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"9c595f48-fb2b-4908-ad96-8607334515b9\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.272905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"9c595f48-fb2b-4908-ad96-8607334515b9\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.273021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"9c595f48-fb2b-4908-ad96-8607334515b9\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.273708 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities" (OuterVolumeSpecName: "utilities") pod "9c595f48-fb2b-4908-ad96-8607334515b9" (UID: "9c595f48-fb2b-4908-ad96-8607334515b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.283918 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx" (OuterVolumeSpecName: "kube-api-access-pppcx") pod "9c595f48-fb2b-4908-ad96-8607334515b9" (UID: "9c595f48-fb2b-4908-ad96-8607334515b9"). InnerVolumeSpecName "kube-api-access-pppcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.375173 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.375198 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.398308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c595f48-fb2b-4908-ad96-8607334515b9" (UID: "9c595f48-fb2b-4908-ad96-8607334515b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.477121 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586451 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c595f48-fb2b-4908-ad96-8607334515b9" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" exitCode=0 Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586501 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"} Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586528 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"d030af88e069347ddd4b826a8d4f64433f67d574c407eadd28610f00671b7b2e"} Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586542 4907 scope.go:117] "RemoveContainer" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586815 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.610037 4907 scope.go:117] "RemoveContainer" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.640904 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"]
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.651295 4907 scope.go:117] "RemoveContainer" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.655621 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"]
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.707782 4907 scope.go:117] "RemoveContainer" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"
Jan 27 18:52:37 crc kubenswrapper[4907]: E0127 18:52:37.708249 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987\": container with ID starting with 29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987 not found: ID does not exist" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708298 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"} err="failed to get container status \"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987\": rpc error: code = NotFound desc = could not find container \"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987\": container with ID starting with 29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987 not found: ID does not exist"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708329 4907 scope.go:117] "RemoveContainer" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"
Jan 27 18:52:37 crc kubenswrapper[4907]: E0127 18:52:37.708726 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22\": container with ID starting with 3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22 not found: ID does not exist" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708775 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"} err="failed to get container status \"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22\": rpc error: code = NotFound desc = could not find container \"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22\": container with ID starting with 3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22 not found: ID does not exist"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708802 4907 scope.go:117] "RemoveContainer" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100"
Jan 27 18:52:37 crc kubenswrapper[4907]: E0127 18:52:37.709187 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100\": container with ID starting with e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100 not found: ID does not exist" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.709258 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100"} err="failed to get container status \"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100\": rpc error: code = NotFound desc = could not find container \"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100\": container with ID starting with e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100 not found: ID does not exist"
Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.761892 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" path="/var/lib/kubelet/pods/9c595f48-fb2b-4908-ad96-8607334515b9/volumes"
Jan 27 18:52:58 crc kubenswrapper[4907]: I0127 18:52:58.789115 4907 generic.go:334] "Generic (PLEG): container finished" podID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerID="2ac918adbc99a2af2c22900e474cfa052487fa6b6d98d9b74ea574cba87e924a" exitCode=0
Jan 27 18:52:58 crc kubenswrapper[4907]: I0127 18:52:58.789181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerDied","Data":"2ac918adbc99a2af2c22900e474cfa052487fa6b6d98d9b74ea574cba87e924a"}
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.371891 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.410500 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") "
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.410700 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") "
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.410802 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") "
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.411005 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") "
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.411045 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") "
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.416690 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.417001 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4" (OuterVolumeSpecName: "kube-api-access-gwtv4") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "kube-api-access-gwtv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.447974 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.465640 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.468786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory" (OuterVolumeSpecName: "inventory") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519385 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519423 4907 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519437 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519446 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519455 4907 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.813062 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerDied","Data":"bbf713b0078b30d25d8341c53102c3f52fc6eae1c45613502e03acdfc202e261"}
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.813111 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbf713b0078b30d25d8341c53102c3f52fc6eae1c45613502e03acdfc202e261"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.813178 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953062 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"]
Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953607 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="extract-content"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953623 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="extract-content"
Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953638 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-utilities"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953645 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-utilities"
Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953665 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="extract-utilities"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953670 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="extract-utilities"
Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953686 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953693 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server"
Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953707 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953713 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953731 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953736 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server"
Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953755 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-content"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953761 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-content"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953954 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953975 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953990 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.954879 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.958364 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.958594 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.958933 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959043 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959284 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959478 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959779 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.981209 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"]
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035359 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035414 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035473 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035537 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035666 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035697 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035728 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035756 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035795 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.137885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.137948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138086 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138139 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138189 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138247 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.139023 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.143331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.143830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.143929 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.144407 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.144536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.145110 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.147452 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.154373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.272212 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.808260 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"]
Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.824101 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerStarted","Data":"0f69b8125dc9fcdd9b2cd9e4ff8f682a137335ebb73db521005819dd12103605"}
Jan 27 18:53:03 crc kubenswrapper[4907]: I0127 18:53:03.850677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerStarted","Data":"0ba51ce4afa559b311ee9f6eaceb7a46aae49217e1cccf0e0a8988f1b288a6a3"}
Jan 27 18:53:03 crc kubenswrapper[4907]: I0127 18:53:03.883587 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" podStartSLOduration=3.269607013 podStartE2EDuration="3.883563771s" podCreationTimestamp="2026-01-27 18:53:00 +0000 UTC" firstStartedPulling="2026-01-27 18:53:01.809350188 +0000 UTC m=+2836.938632800" lastFinishedPulling="2026-01-27 18:53:02.423306946 +0000 UTC m=+2837.552589558" observedRunningTime="2026-01-27 18:53:03.871191172 +0000 UTC m=+2839.000473794" watchObservedRunningTime="2026-01-27 18:53:03.883563771 +0000 UTC m=+2839.012846383"
Jan 27 18:54:26 crc kubenswrapper[4907]: I0127 18:54:26.521259 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:54:26 crc kubenswrapper[4907]: I0127 18:54:26.521870 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:54:56 crc kubenswrapper[4907]: I0127 18:54:56.528206 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:54:56 crc kubenswrapper[4907]: I0127 18:54:56.529966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.520861 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.521437 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.521500 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh"
Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.522621 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.522677 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" gracePeriod=600
Jan 27 18:55:26 crc kubenswrapper[4907]: E0127 18:55:26.641021 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.401616 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" exitCode=0
Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.401665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"}
Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.402373 4907 scope.go:117] "RemoveContainer" containerID="5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5"
Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.403475 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"
Jan 27 18:55:27 crc kubenswrapper[4907]: E0127 18:55:27.404035 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:55:30 crc kubenswrapper[4907]: I0127 18:55:30.440504 4907 generic.go:334] "Generic (PLEG): container finished" podID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerID="0ba51ce4afa559b311ee9f6eaceb7a46aae49217e1cccf0e0a8988f1b288a6a3" exitCode=0
Jan 27 18:55:30 crc kubenswrapper[4907]: I0127 18:55:30.440587 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerDied","Data":"0ba51ce4afa559b311ee9f6eaceb7a46aae49217e1cccf0e0a8988f1b288a6a3"}
Jan 27 18:55:31 crc kubenswrapper[4907]: I0127 18:55:31.900488 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"
Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.009730 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") "
Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.009828 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") "
Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.009943 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") "
Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010045 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") "
Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010073 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") "
Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010104 4907
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010202 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.017620 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.019016 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9" (OuterVolumeSpecName: "kube-api-access-5glf9") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "kube-api-access-5glf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.045978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory" (OuterVolumeSpecName: "inventory") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.046364 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.049177 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.058200 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.058667 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.060977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.065269 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114083 4907 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114128 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114139 4907 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114153 4907 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114163 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114176 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114190 4907 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114201 4907 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114211 4907 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.464932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerDied","Data":"0f69b8125dc9fcdd9b2cd9e4ff8f682a137335ebb73db521005819dd12103605"} Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.464987 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f69b8125dc9fcdd9b2cd9e4ff8f682a137335ebb73db521005819dd12103605" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.465047 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.580298 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r"] Jan 27 18:55:32 crc kubenswrapper[4907]: E0127 18:55:32.581255 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.581282 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.581591 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.582614 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588501 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588659 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588819 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588848 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.589030 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.594329 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r"] Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741112 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: 
\"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741262 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741283 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741348 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741462 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741660 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844408 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844437 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844528 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844809 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.851700 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.851914 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.852177 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.853305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.853471 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.853789 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.863724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.900634 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:33 crc kubenswrapper[4907]: I0127 18:55:33.479946 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r"] Jan 27 18:55:34 crc kubenswrapper[4907]: I0127 18:55:34.492302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerStarted","Data":"95484d5be57abef73f90b14257482a1502bdef8c589d8adbaf0313f7948cde24"} Jan 27 18:55:34 crc kubenswrapper[4907]: I0127 18:55:34.492955 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerStarted","Data":"04012c9b46ce41e496b0c16fe093a9bc28ba197166cbebb5afd7f059c6e4a1d8"} Jan 27 18:55:35 crc kubenswrapper[4907]: I0127 18:55:35.528295 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" podStartSLOduration=2.963293499 podStartE2EDuration="3.528278562s" podCreationTimestamp="2026-01-27 18:55:32 +0000 UTC" firstStartedPulling="2026-01-27 18:55:33.498554076 +0000 UTC m=+2988.627836688" lastFinishedPulling="2026-01-27 18:55:34.063539129 +0000 UTC m=+2989.192821751" observedRunningTime="2026-01-27 18:55:35.518151895 +0000 UTC m=+2990.647434517" watchObservedRunningTime="2026-01-27 18:55:35.528278562 +0000 UTC m=+2990.657561174" Jan 27 18:55:41 crc kubenswrapper[4907]: I0127 18:55:41.748660 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:55:41 crc kubenswrapper[4907]: E0127 18:55:41.749494 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:55:55 crc kubenswrapper[4907]: I0127 18:55:55.767769 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:55:55 crc kubenswrapper[4907]: E0127 18:55:55.768800 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:08 crc kubenswrapper[4907]: I0127 18:56:08.748907 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:08 crc kubenswrapper[4907]: E0127 18:56:08.751023 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:23 crc kubenswrapper[4907]: I0127 18:56:23.748636 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:23 crc kubenswrapper[4907]: E0127 18:56:23.749569 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:36 crc kubenswrapper[4907]: I0127 18:56:36.748493 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:36 crc kubenswrapper[4907]: E0127 18:56:36.749327 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:49 crc kubenswrapper[4907]: I0127 18:56:49.749018 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:49 crc kubenswrapper[4907]: E0127 18:56:49.749777 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:04 crc kubenswrapper[4907]: I0127 18:57:04.749190 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:04 crc kubenswrapper[4907]: E0127 18:57:04.750158 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:18 crc kubenswrapper[4907]: I0127 18:57:18.749119 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:18 crc kubenswrapper[4907]: E0127 18:57:18.750426 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:29 crc kubenswrapper[4907]: I0127 18:57:29.748775 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:29 crc kubenswrapper[4907]: E0127 18:57:29.749655 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:40 crc kubenswrapper[4907]: I0127 18:57:40.747996 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:40 crc kubenswrapper[4907]: E0127 18:57:40.748870 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:53 crc kubenswrapper[4907]: I0127 18:57:53.749327 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:53 crc kubenswrapper[4907]: E0127 18:57:53.750807 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:08 crc kubenswrapper[4907]: I0127 18:58:08.237857 4907 generic.go:334] "Generic (PLEG): container finished" podID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerID="95484d5be57abef73f90b14257482a1502bdef8c589d8adbaf0313f7948cde24" exitCode=0 Jan 27 18:58:08 crc kubenswrapper[4907]: I0127 18:58:08.237963 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerDied","Data":"95484d5be57abef73f90b14257482a1502bdef8c589d8adbaf0313f7948cde24"} Jan 27 18:58:08 crc kubenswrapper[4907]: I0127 18:58:08.748525 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:08 crc kubenswrapper[4907]: E0127 18:58:08.749019 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.743748 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.884887 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885077 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885214 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885307 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: 
\"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885393 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885436 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885468 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.891696 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5" (OuterVolumeSpecName: "kube-api-access-tf5t5") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "kube-api-access-tf5t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.891868 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.922361 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.926761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.928112 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.928978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory" (OuterVolumeSpecName: "inventory") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.931727 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990436 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990487 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990505 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990519 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990532 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath 
\"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990542 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990567 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.289803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerDied","Data":"04012c9b46ce41e496b0c16fe093a9bc28ba197166cbebb5afd7f059c6e4a1d8"} Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.290150 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04012c9b46ce41e496b0c16fe093a9bc28ba197166cbebb5afd7f059c6e4a1d8" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.290071 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.366755 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx"] Jan 27 18:58:10 crc kubenswrapper[4907]: E0127 18:58:10.367640 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.367666 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.367961 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.369231 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.375790 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.376078 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.376201 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.376937 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.379897 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.387070 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx"] Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.503639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.503710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504095 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504445 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504501 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504576 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606820 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606932 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606977 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.607003 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.607088 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.607133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.612829 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.612977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.613104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.613615 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.614151 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.619142 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.624371 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.687526 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:11 crc kubenswrapper[4907]: I0127 18:58:11.335192 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx"] Jan 27 18:58:11 crc kubenswrapper[4907]: I0127 18:58:11.340543 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:58:12 crc kubenswrapper[4907]: I0127 18:58:12.311184 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerStarted","Data":"de934e2dc327a81279af07eb6d91c89ddf59b78184be61a32bd24388b1e4eb23"} Jan 27 18:58:12 crc kubenswrapper[4907]: I0127 18:58:12.312307 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerStarted","Data":"31c4ad01a3355522b9c211e1d04e1216baf4a2feec06bd5c5cb46ad3bd3e8390"} Jan 27 18:58:12 crc kubenswrapper[4907]: I0127 18:58:12.348366 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" podStartSLOduration=1.827527863 podStartE2EDuration="2.348344368s" podCreationTimestamp="2026-01-27 18:58:10 +0000 UTC" firstStartedPulling="2026-01-27 18:58:11.340339926 +0000 UTC m=+3146.469622538" lastFinishedPulling="2026-01-27 18:58:11.861156441 +0000 UTC m=+3146.990439043" observedRunningTime="2026-01-27 18:58:12.3296353 +0000 UTC m=+3147.458917922" watchObservedRunningTime="2026-01-27 18:58:12.348344368 +0000 UTC m=+3147.477626990" Jan 27 18:58:22 crc kubenswrapper[4907]: I0127 18:58:22.748251 4907 scope.go:117] "RemoveContainer" 
containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:22 crc kubenswrapper[4907]: E0127 18:58:22.749204 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:33 crc kubenswrapper[4907]: I0127 18:58:33.747877 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:33 crc kubenswrapper[4907]: E0127 18:58:33.748898 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:44 crc kubenswrapper[4907]: I0127 18:58:44.748461 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:44 crc kubenswrapper[4907]: E0127 18:58:44.749475 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:58 crc kubenswrapper[4907]: I0127 18:58:58.749262 4907 scope.go:117] 
"RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:58 crc kubenswrapper[4907]: E0127 18:58:58.750199 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:10 crc kubenswrapper[4907]: I0127 18:59:10.748188 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:10 crc kubenswrapper[4907]: E0127 18:59:10.749341 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:12 crc kubenswrapper[4907]: I0127 18:59:12.917139 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:12 crc kubenswrapper[4907]: I0127 18:59:12.920794 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:12 crc kubenswrapper[4907]: I0127 18:59:12.954146 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.018649 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.018725 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.018790 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121198 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121247 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.122046 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.151685 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.251468 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.796240 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:13 crc kubenswrapper[4907]: W0127 18:59:13.809702 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2b1f50_883c_4fe6_8134_b228bae26b02.slice/crio-ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd WatchSource:0}: Error finding container ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd: Status 404 returned error can't find the container with id ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.986202 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerStarted","Data":"ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd"} Jan 27 18:59:15 crc kubenswrapper[4907]: I0127 18:59:15.000754 4907 generic.go:334] "Generic (PLEG): container finished" podID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" exitCode=0 Jan 27 18:59:15 crc kubenswrapper[4907]: I0127 18:59:15.000807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478"} Jan 27 18:59:16 crc kubenswrapper[4907]: I0127 18:59:16.013761 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" 
event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerStarted","Data":"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d"} Jan 27 18:59:18 crc kubenswrapper[4907]: I0127 18:59:18.041947 4907 generic.go:334] "Generic (PLEG): container finished" podID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" exitCode=0 Jan 27 18:59:18 crc kubenswrapper[4907]: I0127 18:59:18.042545 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d"} Jan 27 18:59:19 crc kubenswrapper[4907]: I0127 18:59:19.057814 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerStarted","Data":"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21"} Jan 27 18:59:19 crc kubenswrapper[4907]: I0127 18:59:19.098572 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhl9t" podStartSLOduration=3.624154463 podStartE2EDuration="7.098530955s" podCreationTimestamp="2026-01-27 18:59:12 +0000 UTC" firstStartedPulling="2026-01-27 18:59:15.003989425 +0000 UTC m=+3210.133272037" lastFinishedPulling="2026-01-27 18:59:18.478365917 +0000 UTC m=+3213.607648529" observedRunningTime="2026-01-27 18:59:19.082478912 +0000 UTC m=+3214.211761524" watchObservedRunningTime="2026-01-27 18:59:19.098530955 +0000 UTC m=+3214.227813557" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.252691 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.253472 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.320067 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.748683 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:23 crc kubenswrapper[4907]: E0127 18:59:23.749055 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:24 crc kubenswrapper[4907]: I0127 18:59:24.161972 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:24 crc kubenswrapper[4907]: I0127 18:59:24.233608 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.130546 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhl9t" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" containerID="cri-o://ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" gracePeriod=2 Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.661927 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.670787 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"df2b1f50-883c-4fe6-8134-b228bae26b02\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.670873 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"df2b1f50-883c-4fe6-8134-b228bae26b02\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.671217 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"df2b1f50-883c-4fe6-8134-b228bae26b02\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.671729 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities" (OuterVolumeSpecName: "utilities") pod "df2b1f50-883c-4fe6-8134-b228bae26b02" (UID: "df2b1f50-883c-4fe6-8134-b228bae26b02"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.671950 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.677114 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p" (OuterVolumeSpecName: "kube-api-access-gf95p") pod "df2b1f50-883c-4fe6-8134-b228bae26b02" (UID: "df2b1f50-883c-4fe6-8134-b228bae26b02"). InnerVolumeSpecName "kube-api-access-gf95p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.774463 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143066 4907 generic.go:334] "Generic (PLEG): container finished" podID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" exitCode=0 Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143122 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21"} Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143151 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd"} Jan 27 18:59:27 crc kubenswrapper[4907]: 
I0127 18:59:27.143168 4907 scope.go:117] "RemoveContainer" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143178 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.168112 4907 scope.go:117] "RemoveContainer" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.190545 4907 scope.go:117] "RemoveContainer" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.253517 4907 scope.go:117] "RemoveContainer" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" Jan 27 18:59:27 crc kubenswrapper[4907]: E0127 18:59:27.254095 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21\": container with ID starting with ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21 not found: ID does not exist" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254130 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21"} err="failed to get container status \"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21\": rpc error: code = NotFound desc = could not find container \"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21\": container with ID starting with ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21 not found: ID does not exist" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254150 4907 
scope.go:117] "RemoveContainer" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" Jan 27 18:59:27 crc kubenswrapper[4907]: E0127 18:59:27.254757 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d\": container with ID starting with dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d not found: ID does not exist" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254780 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d"} err="failed to get container status \"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d\": rpc error: code = NotFound desc = could not find container \"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d\": container with ID starting with dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d not found: ID does not exist" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254794 4907 scope.go:117] "RemoveContainer" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" Jan 27 18:59:27 crc kubenswrapper[4907]: E0127 18:59:27.255166 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478\": container with ID starting with f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478 not found: ID does not exist" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.255220 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478"} err="failed to get container status \"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478\": rpc error: code = NotFound desc = could not find container \"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478\": container with ID starting with f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478 not found: ID does not exist" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.287754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df2b1f50-883c-4fe6-8134-b228bae26b02" (UID: "df2b1f50-883c-4fe6-8134-b228bae26b02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.290918 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.485684 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.500736 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.762778 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" path="/var/lib/kubelet/pods/df2b1f50-883c-4fe6-8134-b228bae26b02/volumes" Jan 27 18:59:35 crc kubenswrapper[4907]: I0127 18:59:35.756668 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:35 crc kubenswrapper[4907]: E0127 
18:59:35.757519 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:46 crc kubenswrapper[4907]: I0127 18:59:46.749300 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:46 crc kubenswrapper[4907]: E0127 18:59:46.750111 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:59 crc kubenswrapper[4907]: I0127 18:59:59.747947 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:59 crc kubenswrapper[4907]: E0127 18:59:59.749874 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164041 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:00:00 
crc kubenswrapper[4907]: E0127 19:00:00.164911 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-content" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164929 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-content" Jan 27 19:00:00 crc kubenswrapper[4907]: E0127 19:00:00.164940 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-utilities" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164948 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-utilities" Jan 27 19:00:00 crc kubenswrapper[4907]: E0127 19:00:00.164973 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164979 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.165244 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.166103 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.168322 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.168934 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.179678 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.247685 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.248088 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.248149 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.350083 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.350329 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.350360 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.351044 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.356680 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.369247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.495929 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:01 crc kubenswrapper[4907]: I0127 19:00:01.003503 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:00:01 crc kubenswrapper[4907]: I0127 19:00:01.588531 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerStarted","Data":"730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08"} Jan 27 19:00:01 crc kubenswrapper[4907]: I0127 19:00:01.589004 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerStarted","Data":"969993e303f3b9bbbb118f37f88ab62a2d09090b037b2ac6d99ae74f9847c6d8"} Jan 27 19:00:02 crc kubenswrapper[4907]: I0127 19:00:02.612719 4907 generic.go:334] "Generic (PLEG): container finished" podID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" 
containerID="730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08" exitCode=0 Jan 27 19:00:02 crc kubenswrapper[4907]: I0127 19:00:02.612765 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerDied","Data":"730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08"} Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.049141 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.139814 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.140184 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.140240 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.140866 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"2b8f8a81-de05-4458-b8bc-4031caa5a02c" (UID: "2b8f8a81-de05-4458-b8bc-4031caa5a02c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.141246 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.145845 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b8f8a81-de05-4458-b8bc-4031caa5a02c" (UID: "2b8f8a81-de05-4458-b8bc-4031caa5a02c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.145862 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f" (OuterVolumeSpecName: "kube-api-access-c8l5f") pod "2b8f8a81-de05-4458-b8bc-4031caa5a02c" (UID: "2b8f8a81-de05-4458-b8bc-4031caa5a02c"). InnerVolumeSpecName "kube-api-access-c8l5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.243364 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.243406 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.625773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerDied","Data":"969993e303f3b9bbbb118f37f88ab62a2d09090b037b2ac6d99ae74f9847c6d8"} Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.626097 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="969993e303f3b9bbbb118f37f88ab62a2d09090b037b2ac6d99ae74f9847c6d8" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.625841 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:04 crc kubenswrapper[4907]: I0127 19:00:04.123922 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"] Jan 27 19:00:04 crc kubenswrapper[4907]: I0127 19:00:04.134522 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"] Jan 27 19:00:05 crc kubenswrapper[4907]: I0127 19:00:05.779107 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" path="/var/lib/kubelet/pods/98eb00a2-9da3-459d-b011-7d92bcd6ed21/volumes" Jan 27 19:00:11 crc kubenswrapper[4907]: I0127 19:00:11.748884 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 19:00:11 crc kubenswrapper[4907]: E0127 19:00:11.750090 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:00:20 crc kubenswrapper[4907]: I0127 19:00:20.811203 4907 generic.go:334] "Generic (PLEG): container finished" podID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerID="de934e2dc327a81279af07eb6d91c89ddf59b78184be61a32bd24388b1e4eb23" exitCode=0 Jan 27 19:00:20 crc kubenswrapper[4907]: I0127 19:00:20.811255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" 
event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerDied","Data":"de934e2dc327a81279af07eb6d91c89ddf59b78184be61a32bd24388b1e4eb23"} Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.382358 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561680 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561794 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561827 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561936 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561969 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.562023 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.562069 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.567882 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.568102 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8" (OuterVolumeSpecName: "kube-api-access-7pfg8") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "kube-api-access-7pfg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.595136 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory" (OuterVolumeSpecName: "inventory") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.595413 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.596314 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.600251 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.601865 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666335 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666380 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666394 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666408 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666427 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") on node 
\"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666440 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666453 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.834402 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerDied","Data":"31c4ad01a3355522b9c211e1d04e1216baf4a2feec06bd5c5cb46ad3bd3e8390"} Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.834451 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31c4ad01a3355522b9c211e1d04e1216baf4a2feec06bd5c5cb46ad3bd3e8390" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.834456 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.940288 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc"] Jan 27 19:00:22 crc kubenswrapper[4907]: E0127 19:00:22.940863 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.940885 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: E0127 19:00:22.940912 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" containerName="collect-profiles" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.940919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" containerName="collect-profiles" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.941158 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.941180 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" containerName="collect-profiles" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.942037 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945114 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945361 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945480 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945618 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945815 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.954358 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc"] Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.077408 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.077503 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") 
" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.077696 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.078105 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.078167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181114 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 
19:00:23.181473 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181710 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181851 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.186833 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.187375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.187374 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.188244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.202022 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.277751 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.848877 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc"] Jan 27 19:00:24 crc kubenswrapper[4907]: I0127 19:00:24.864065 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerStarted","Data":"3c7db8bf98847300f59678b48db417de088b5f4aeca9a7730e7ab696c96af750"} Jan 27 19:00:24 crc kubenswrapper[4907]: I0127 19:00:24.864788 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerStarted","Data":"ebb7e691bdeb9c59faa365527a1e632eeb69022610c7d31919ed0fbb51ac8ea8"} Jan 27 19:00:24 crc kubenswrapper[4907]: I0127 19:00:24.891742 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" podStartSLOduration=2.459446189 podStartE2EDuration="2.891716926s" podCreationTimestamp="2026-01-27 19:00:22 +0000 UTC" firstStartedPulling="2026-01-27 19:00:23.850586539 +0000 UTC m=+3278.979869151" lastFinishedPulling="2026-01-27 19:00:24.282857276 +0000 UTC m=+3279.412139888" observedRunningTime="2026-01-27 19:00:24.879542762 +0000 UTC m=+3280.008825364" watchObservedRunningTime="2026-01-27 19:00:24.891716926 +0000 UTC m=+3280.020999538" Jan 27 19:00:26 crc kubenswrapper[4907]: I0127 19:00:26.748277 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 19:00:27 crc kubenswrapper[4907]: 
I0127 19:00:27.901315 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94"} Jan 27 19:00:42 crc kubenswrapper[4907]: I0127 19:00:42.061637 4907 generic.go:334] "Generic (PLEG): container finished" podID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerID="3c7db8bf98847300f59678b48db417de088b5f4aeca9a7730e7ab696c96af750" exitCode=0 Jan 27 19:00:42 crc kubenswrapper[4907]: I0127 19:00:42.061719 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerDied","Data":"3c7db8bf98847300f59678b48db417de088b5f4aeca9a7730e7ab696c96af750"} Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.637834 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801405 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801598 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801840 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801893 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801963 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 
19:00:43.807979 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt" (OuterVolumeSpecName: "kube-api-access-b5lpt") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "kube-api-access-b5lpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.838108 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.838634 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.839210 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.840514 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory" (OuterVolumeSpecName: "inventory") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905030 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905073 4907 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905088 4907 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905104 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905118 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:44 crc kubenswrapper[4907]: I0127 19:00:44.085201 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerDied","Data":"ebb7e691bdeb9c59faa365527a1e632eeb69022610c7d31919ed0fbb51ac8ea8"} Jan 27 19:00:44 crc kubenswrapper[4907]: I0127 19:00:44.085250 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebb7e691bdeb9c59faa365527a1e632eeb69022610c7d31919ed0fbb51ac8ea8" Jan 27 19:00:44 crc kubenswrapper[4907]: I0127 19:00:44.085549 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.160573 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492341-l97qr"] Jan 27 19:01:00 crc kubenswrapper[4907]: E0127 19:01:00.161758 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.161778 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.162103 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.163136 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.173758 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-l97qr"] Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325387 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325444 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325476 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325687 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428214 4907 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428260 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428338 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.434462 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.434461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.440547 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.450726 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.505914 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.666173 4907 scope.go:117] "RemoveContainer" containerID="18df1497634165c04863e96f7f6daec0a2367654ea826c8f22afef5c3b441191" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.989088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-l97qr"] Jan 27 19:01:01 crc kubenswrapper[4907]: I0127 19:01:01.275519 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerStarted","Data":"bbdd02deab8c0a2ccad14373d3258f0611a0263009c3981dc68ab4c697b01553"} Jan 27 19:01:02 crc kubenswrapper[4907]: I0127 19:01:02.286686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerStarted","Data":"43901a3b4379bb236fe8c34fd5923c098a8bbb69bb883b3b1a55a5d27e825064"} Jan 27 19:01:02 crc kubenswrapper[4907]: I0127 19:01:02.305310 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492341-l97qr" podStartSLOduration=2.305295909 podStartE2EDuration="2.305295909s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:02.299179038 +0000 UTC m=+3317.428461650" watchObservedRunningTime="2026-01-27 19:01:02.305295909 +0000 UTC m=+3317.434578521" Jan 27 19:01:06 crc kubenswrapper[4907]: I0127 19:01:06.330934 4907 generic.go:334] "Generic (PLEG): container finished" podID="6e412045-8e45-4718-98e5-17e76c69623a" containerID="43901a3b4379bb236fe8c34fd5923c098a8bbb69bb883b3b1a55a5d27e825064" exitCode=0 Jan 27 19:01:06 crc kubenswrapper[4907]: I0127 19:01:06.331052 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerDied","Data":"43901a3b4379bb236fe8c34fd5923c098a8bbb69bb883b3b1a55a5d27e825064"} Jan 27 19:01:07 crc kubenswrapper[4907]: I0127 19:01:07.902172 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027541 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027836 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027917 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027947 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.033824 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv" (OuterVolumeSpecName: "kube-api-access-mb9bv") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "kube-api-access-mb9bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.036744 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.070077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.100684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data" (OuterVolumeSpecName: "config-data") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131100 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131153 4907 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131166 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131180 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.354432 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerDied","Data":"bbdd02deab8c0a2ccad14373d3258f0611a0263009c3981dc68ab4c697b01553"} Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.354482 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbdd02deab8c0a2ccad14373d3258f0611a0263009c3981dc68ab4c697b01553" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.354485 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.199197 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:25 crc kubenswrapper[4907]: E0127 19:01:25.201671 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e412045-8e45-4718-98e5-17e76c69623a" containerName="keystone-cron" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.201790 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e412045-8e45-4718-98e5-17e76c69623a" containerName="keystone-cron" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.202126 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e412045-8e45-4718-98e5-17e76c69623a" containerName="keystone-cron" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.204485 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.214000 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.282329 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.282428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " 
pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.282605 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.384697 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.384780 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.384905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.385542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"community-operators-wxb6x\" (UID: 
\"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.385808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.405610 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.529660 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.187526 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.603162 4907 generic.go:334] "Generic (PLEG): container finished" podID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" exitCode=0 Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.603228 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863"} Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.603452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" 
event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerStarted","Data":"17e40b23eac574b1fad07b7158cb8bc7797422af7d2de32e35079705e0b6e787"} Jan 27 19:01:28 crc kubenswrapper[4907]: I0127 19:01:28.634411 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerStarted","Data":"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1"} Jan 27 19:01:30 crc kubenswrapper[4907]: I0127 19:01:30.659103 4907 generic.go:334] "Generic (PLEG): container finished" podID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" exitCode=0 Jan 27 19:01:30 crc kubenswrapper[4907]: I0127 19:01:30.659219 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1"} Jan 27 19:01:35 crc kubenswrapper[4907]: I0127 19:01:35.723138 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerStarted","Data":"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c"} Jan 27 19:01:35 crc kubenswrapper[4907]: I0127 19:01:35.746791 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxb6x" podStartSLOduration=2.053541694 podStartE2EDuration="10.746766205s" podCreationTimestamp="2026-01-27 19:01:25 +0000 UTC" firstStartedPulling="2026-01-27 19:01:26.605197877 +0000 UTC m=+3341.734480489" lastFinishedPulling="2026-01-27 19:01:35.298422388 +0000 UTC m=+3350.427705000" observedRunningTime="2026-01-27 19:01:35.742291849 +0000 UTC m=+3350.871574481" watchObservedRunningTime="2026-01-27 19:01:35.746766205 +0000 UTC 
m=+3350.876048827" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.530407 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.531215 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.618658 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.906571 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.972261 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:47 crc kubenswrapper[4907]: I0127 19:01:47.866014 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxb6x" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" containerID="cri-o://3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" gracePeriod=2 Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.490088 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.615674 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"019ce7ce-e01c-4708-a3ea-4e4139158064\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.615911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"019ce7ce-e01c-4708-a3ea-4e4139158064\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.615938 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"019ce7ce-e01c-4708-a3ea-4e4139158064\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.618011 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities" (OuterVolumeSpecName: "utilities") pod "019ce7ce-e01c-4708-a3ea-4e4139158064" (UID: "019ce7ce-e01c-4708-a3ea-4e4139158064"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.623903 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652" (OuterVolumeSpecName: "kube-api-access-5z652") pod "019ce7ce-e01c-4708-a3ea-4e4139158064" (UID: "019ce7ce-e01c-4708-a3ea-4e4139158064"). InnerVolumeSpecName "kube-api-access-5z652". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.688344 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "019ce7ce-e01c-4708-a3ea-4e4139158064" (UID: "019ce7ce-e01c-4708-a3ea-4e4139158064"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.719288 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.719329 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.719342 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885798 4907 generic.go:334] "Generic (PLEG): container finished" podID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" exitCode=0 Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885844 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c"} Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885873 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"17e40b23eac574b1fad07b7158cb8bc7797422af7d2de32e35079705e0b6e787"} Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885891 4907 scope.go:117] "RemoveContainer" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.886082 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.928615 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.929647 4907 scope.go:117] "RemoveContainer" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.941490 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.957108 4907 scope.go:117] "RemoveContainer" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.009194 4907 scope.go:117] "RemoveContainer" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" Jan 27 19:01:49 crc kubenswrapper[4907]: E0127 19:01:49.009720 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c\": container with ID starting with 3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c not found: ID does not exist" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 
19:01:49.009751 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c"} err="failed to get container status \"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c\": rpc error: code = NotFound desc = could not find container \"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c\": container with ID starting with 3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c not found: ID does not exist" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.009771 4907 scope.go:117] "RemoveContainer" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" Jan 27 19:01:49 crc kubenswrapper[4907]: E0127 19:01:49.010117 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1\": container with ID starting with a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1 not found: ID does not exist" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.010166 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1"} err="failed to get container status \"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1\": rpc error: code = NotFound desc = could not find container \"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1\": container with ID starting with a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1 not found: ID does not exist" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.010199 4907 scope.go:117] "RemoveContainer" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" Jan 27 19:01:49 crc 
kubenswrapper[4907]: E0127 19:01:49.010492 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863\": container with ID starting with ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863 not found: ID does not exist" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.010524 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863"} err="failed to get container status \"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863\": rpc error: code = NotFound desc = could not find container \"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863\": container with ID starting with ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863 not found: ID does not exist" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.768354 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" path="/var/lib/kubelet/pods/019ce7ce-e01c-4708-a3ea-4e4139158064/volumes" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.146102 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:02:34 crc kubenswrapper[4907]: E0127 19:02:34.147348 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-utilities" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147366 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-utilities" Jan 27 19:02:34 crc kubenswrapper[4907]: E0127 19:02:34.147387 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147394 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" Jan 27 19:02:34 crc kubenswrapper[4907]: E0127 19:02:34.147437 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-content" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147445 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-content" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147686 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.149832 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.172938 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.289105 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.289188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " 
pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.289475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392321 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392509 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392580 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392929 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.393195 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.412756 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.473039 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:35 crc kubenswrapper[4907]: I0127 19:02:35.027073 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"]
Jan 27 19:02:35 crc kubenswrapper[4907]: I0127 19:02:35.427773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerStarted","Data":"4b7bada88b753e16b48cbb3e94d46d3b57b2e824e7653126a28f79fe70c1f648"}
Jan 27 19:02:36 crc kubenswrapper[4907]: I0127 19:02:36.456116 4907 generic.go:334] "Generic (PLEG): container finished" podID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210" exitCode=0
Jan 27 19:02:36 crc kubenswrapper[4907]: I0127 19:02:36.456170 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210"}
Jan 27 19:02:38 crc kubenswrapper[4907]: I0127 19:02:38.479063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerStarted","Data":"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"}
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.785516 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"]
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.789900 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.799971 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"]
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.835884 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.837518 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.837684 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.939556 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.939744 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.939899 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.940228 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.940436 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.966030 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:47 crc kubenswrapper[4907]: I0127 19:02:47.108540 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:47 crc kubenswrapper[4907]: I0127 19:02:47.604822 4907 generic.go:334] "Generic (PLEG): container finished" podID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e" exitCode=0
Jan 27 19:02:47 crc kubenswrapper[4907]: I0127 19:02:47.605134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"}
Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.086665 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"]
Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.616678 4907 generic.go:334] "Generic (PLEG): container finished" podID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a" exitCode=0
Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.616785 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a"}
Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.617295 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerStarted","Data":"3d91b0c7e2dd6ccad025cfb3de7c34c804667f8241cb40d44be97b416557728b"}
Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.623842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerStarted","Data":"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"}
Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.672668 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqk6q" podStartSLOduration=3.070450829 podStartE2EDuration="14.672649436s" podCreationTimestamp="2026-01-27 19:02:34 +0000 UTC" firstStartedPulling="2026-01-27 19:02:36.460589998 +0000 UTC m=+3411.589872610" lastFinishedPulling="2026-01-27 19:02:48.062788585 +0000 UTC m=+3423.192071217" observedRunningTime="2026-01-27 19:02:48.658348385 +0000 UTC m=+3423.787630987" watchObservedRunningTime="2026-01-27 19:02:48.672649436 +0000 UTC m=+3423.801932048"
Jan 27 19:02:50 crc kubenswrapper[4907]: I0127 19:02:50.647975 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerStarted","Data":"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"}
Jan 27 19:02:51 crc kubenswrapper[4907]: I0127 19:02:51.664460 4907 generic.go:334] "Generic (PLEG): container finished" podID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005" exitCode=0
Jan 27 19:02:51 crc kubenswrapper[4907]: I0127 19:02:51.664664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"}
Jan 27 19:02:52 crc kubenswrapper[4907]: I0127 19:02:52.680908 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerStarted","Data":"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"}
Jan 27 19:02:52 crc kubenswrapper[4907]: I0127 19:02:52.704968 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-whhb5" podStartSLOduration=3.177323084 podStartE2EDuration="6.704946698s" podCreationTimestamp="2026-01-27 19:02:46 +0000 UTC" firstStartedPulling="2026-01-27 19:02:48.61900561 +0000 UTC m=+3423.748288222" lastFinishedPulling="2026-01-27 19:02:52.146629224 +0000 UTC m=+3427.275911836" observedRunningTime="2026-01-27 19:02:52.699599828 +0000 UTC m=+3427.828882440" watchObservedRunningTime="2026-01-27 19:02:52.704946698 +0000 UTC m=+3427.834229310"
Jan 27 19:02:54 crc kubenswrapper[4907]: I0127 19:02:54.474736 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:54 crc kubenswrapper[4907]: I0127 19:02:54.475097 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:02:55 crc kubenswrapper[4907]: I0127 19:02:55.520957 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqk6q" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:02:55 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:02:55 crc kubenswrapper[4907]: >
Jan 27 19:02:56 crc kubenswrapper[4907]: I0127 19:02:56.521699 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:02:56 crc kubenswrapper[4907]: I0127 19:02:56.521773 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.109306 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.109397 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.164047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.808521 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.865675 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"]
Jan 27 19:02:59 crc kubenswrapper[4907]: I0127 19:02:59.763377 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-whhb5" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="registry-server" containerID="cri-o://ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490" gracePeriod=2
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.330758 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.476606 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") "
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.477122 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") "
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.477232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") "
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.477728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities" (OuterVolumeSpecName: "utilities") pod "35a463a4-55b4-4c04-a4f4-ad5b13691a68" (UID: "35a463a4-55b4-4c04-a4f4-ad5b13691a68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.478357 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.483680 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj" (OuterVolumeSpecName: "kube-api-access-xh7qj") pod "35a463a4-55b4-4c04-a4f4-ad5b13691a68" (UID: "35a463a4-55b4-4c04-a4f4-ad5b13691a68"). InnerVolumeSpecName "kube-api-access-xh7qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.505765 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35a463a4-55b4-4c04-a4f4-ad5b13691a68" (UID: "35a463a4-55b4-4c04-a4f4-ad5b13691a68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.580338 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.580375 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776411 4907 generic.go:334] "Generic (PLEG): container finished" podID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490" exitCode=0
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776462 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"}
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776520 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"3d91b0c7e2dd6ccad025cfb3de7c34c804667f8241cb40d44be97b416557728b"}
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776536 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776545 4907 scope.go:117] "RemoveContainer" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.812442 4907 scope.go:117] "RemoveContainer" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.823402 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"]
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.835916 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"]
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.839685 4907 scope.go:117] "RemoveContainer" containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.911687 4907 scope.go:117] "RemoveContainer" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"
Jan 27 19:03:00 crc kubenswrapper[4907]: E0127 19:03:00.912090 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490\": container with ID starting with ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490 not found: ID does not exist" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912123 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"} err="failed to get container status \"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490\": rpc error: code = NotFound desc = could not find container \"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490\": container with ID starting with ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490 not found: ID does not exist"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912149 4907 scope.go:117] "RemoveContainer" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"
Jan 27 19:03:00 crc kubenswrapper[4907]: E0127 19:03:00.912388 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005\": container with ID starting with 90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005 not found: ID does not exist" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912416 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"} err="failed to get container status \"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005\": rpc error: code = NotFound desc = could not find container \"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005\": container with ID starting with 90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005 not found: ID does not exist"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912431 4907 scope.go:117] "RemoveContainer" containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a"
Jan 27 19:03:00 crc kubenswrapper[4907]: E0127 19:03:00.912668 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a\": container with ID starting with 179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a not found: ID does not exist" containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a"
Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912692 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a"} err="failed to get container status \"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a\": rpc error: code = NotFound desc = could not find container \"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a\": container with ID starting with 179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a not found: ID does not exist"
Jan 27 19:03:01 crc kubenswrapper[4907]: I0127 19:03:01.762264 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" path="/var/lib/kubelet/pods/35a463a4-55b4-4c04-a4f4-ad5b13691a68/volumes"
Jan 27 19:03:04 crc kubenswrapper[4907]: I0127 19:03:04.525531 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:03:04 crc kubenswrapper[4907]: I0127 19:03:04.576244 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:03:05 crc kubenswrapper[4907]: I0127 19:03:05.342462 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"]
Jan 27 19:03:05 crc kubenswrapper[4907]: I0127 19:03:05.842134 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqk6q" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" containerID="cri-o://6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2" gracePeriod=2
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.393953 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.438271 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"96cfd20e-3418-4124-ad85-e794d2fad77d\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") "
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.438494 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"96cfd20e-3418-4124-ad85-e794d2fad77d\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") "
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.438609 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"96cfd20e-3418-4124-ad85-e794d2fad77d\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") "
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.446221 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw" (OuterVolumeSpecName: "kube-api-access-z97xw") pod "96cfd20e-3418-4124-ad85-e794d2fad77d" (UID: "96cfd20e-3418-4124-ad85-e794d2fad77d"). InnerVolumeSpecName "kube-api-access-z97xw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.446725 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities" (OuterVolumeSpecName: "utilities") pod "96cfd20e-3418-4124-ad85-e794d2fad77d" (UID: "96cfd20e-3418-4124-ad85-e794d2fad77d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.580186 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.580229 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.709029 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96cfd20e-3418-4124-ad85-e794d2fad77d" (UID: "96cfd20e-3418-4124-ad85-e794d2fad77d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.787680 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856360 4907 generic.go:334] "Generic (PLEG): container finished" podID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2" exitCode=0
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856418 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856442 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"}
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856804 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"4b7bada88b753e16b48cbb3e94d46d3b57b2e824e7653126a28f79fe70c1f648"}
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856821 4907 scope.go:117] "RemoveContainer" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.878765 4907 scope.go:117] "RemoveContainer" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.914544 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"]
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.930168 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"]
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.937165 4907 scope.go:117] "RemoveContainer" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.974282 4907 scope.go:117] "RemoveContainer" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"
Jan 27 19:03:06 crc kubenswrapper[4907]: E0127 19:03:06.974820 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2\": container with ID starting with 6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2 not found: ID does not exist" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.974879 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"} err="failed to get container status \"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2\": rpc error: code = NotFound desc = could not find container \"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2\": container with ID starting with 6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2 not found: ID does not exist"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.974906 4907 scope.go:117] "RemoveContainer" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"
Jan 27 19:03:06 crc kubenswrapper[4907]: E0127 19:03:06.975485 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e\": container with ID starting with 9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e not found: ID does not exist" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.975536 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"} err="failed to get container status \"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e\": rpc error: code = NotFound desc = could not find container \"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e\": container with ID starting with 9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e not found: ID does not exist"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.975644 4907 scope.go:117] "RemoveContainer" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210"
Jan 27 19:03:06 crc kubenswrapper[4907]: E0127 19:03:06.976210 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210\": container with ID starting with 3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210 not found: ID does not exist" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210"
Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.976245 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210"} err="failed to get container status \"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210\": rpc error: code = NotFound desc = could not find container \"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210\": container with ID starting with 3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210 not found: ID does not exist"
Jan 27 19:03:07 crc kubenswrapper[4907]: I0127 19:03:07.762455 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" path="/var/lib/kubelet/pods/96cfd20e-3418-4124-ad85-e794d2fad77d/volumes"
Jan 27 19:03:26 crc kubenswrapper[4907]: I0127 19:03:26.522275 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:03:26 crc kubenswrapper[4907]: I0127 19:03:26.523105 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.521384 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.522441 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.522559 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh"
Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.524133 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.524256 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94" gracePeriod=600
Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415406 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94" exitCode=0
Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94"}
Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415857 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809"}
Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415897 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"
Jan 27 19:05:56 crc kubenswrapper[4907]: I0127 19:05:56.536241 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:05:56 crc kubenswrapper[4907]: I0127 19:05:56.537223 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan
27 19:06:26 crc kubenswrapper[4907]: I0127 19:06:26.521774 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:06:26 crc kubenswrapper[4907]: I0127 19:06:26.522441 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.521294 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.521711 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.521767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.522780 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809"} 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.522846 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" gracePeriod=600 Jan 27 19:06:56 crc kubenswrapper[4907]: E0127 19:06:56.650834 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.598264 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" exitCode=0 Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.598349 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809"} Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.598700 4907 scope.go:117] "RemoveContainer" containerID="2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94" Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.599581 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 
27 19:06:57 crc kubenswrapper[4907]: E0127 19:06:57.600016 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:07 crc kubenswrapper[4907]: I0127 19:07:07.753397 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:07 crc kubenswrapper[4907]: E0127 19:07:07.754235 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:18 crc kubenswrapper[4907]: I0127 19:07:18.748451 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:18 crc kubenswrapper[4907]: E0127 19:07:18.749320 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:29 crc kubenswrapper[4907]: I0127 19:07:29.748381 4907 scope.go:117] "RemoveContainer" 
containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:29 crc kubenswrapper[4907]: E0127 19:07:29.749794 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:41 crc kubenswrapper[4907]: I0127 19:07:41.750492 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:41 crc kubenswrapper[4907]: E0127 19:07:41.751357 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:53 crc kubenswrapper[4907]: I0127 19:07:53.749352 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:53 crc kubenswrapper[4907]: E0127 19:07:53.750403 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:05 crc kubenswrapper[4907]: I0127 19:08:05.767777 4907 scope.go:117] 
"RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:05 crc kubenswrapper[4907]: E0127 19:08:05.768792 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:16 crc kubenswrapper[4907]: I0127 19:08:16.748488 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:16 crc kubenswrapper[4907]: E0127 19:08:16.749384 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:30 crc kubenswrapper[4907]: I0127 19:08:30.749245 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:30 crc kubenswrapper[4907]: E0127 19:08:30.750205 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:43 crc kubenswrapper[4907]: I0127 19:08:43.748835 
4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:43 crc kubenswrapper[4907]: E0127 19:08:43.749493 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:56 crc kubenswrapper[4907]: I0127 19:08:56.750029 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:56 crc kubenswrapper[4907]: E0127 19:08:56.751343 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:09 crc kubenswrapper[4907]: I0127 19:09:09.758983 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:09 crc kubenswrapper[4907]: E0127 19:09:09.760615 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:23 crc kubenswrapper[4907]: I0127 
19:09:23.748197 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:23 crc kubenswrapper[4907]: E0127 19:09:23.749160 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:38 crc kubenswrapper[4907]: I0127 19:09:38.748798 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:38 crc kubenswrapper[4907]: E0127 19:09:38.749539 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:49 crc kubenswrapper[4907]: I0127 19:09:49.748826 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:49 crc kubenswrapper[4907]: E0127 19:09:49.750060 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:02 crc 
kubenswrapper[4907]: I0127 19:10:02.747870 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:02 crc kubenswrapper[4907]: E0127 19:10:02.748659 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:15 crc kubenswrapper[4907]: I0127 19:10:15.764092 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:15 crc kubenswrapper[4907]: E0127 19:10:15.765141 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:26 crc kubenswrapper[4907]: I0127 19:10:26.750357 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:26 crc kubenswrapper[4907]: E0127 19:10:26.751303 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 
27 19:10:40 crc kubenswrapper[4907]: I0127 19:10:40.748404 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:40 crc kubenswrapper[4907]: E0127 19:10:40.750540 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:53 crc kubenswrapper[4907]: I0127 19:10:53.754873 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:53 crc kubenswrapper[4907]: E0127 19:10:53.755615 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:08 crc kubenswrapper[4907]: I0127 19:11:08.748397 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:08 crc kubenswrapper[4907]: E0127 19:11:08.750916 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:19 crc kubenswrapper[4907]: I0127 19:11:19.757893 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:19 crc kubenswrapper[4907]: E0127 19:11:19.759385 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:31 crc kubenswrapper[4907]: I0127 19:11:31.749065 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:31 crc kubenswrapper[4907]: E0127 19:11:31.749935 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:46 crc kubenswrapper[4907]: I0127 19:11:46.748625 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:46 crc kubenswrapper[4907]: E0127 19:11:46.749419 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:58 crc kubenswrapper[4907]: I0127 19:11:58.749933 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:59 crc kubenswrapper[4907]: I0127 19:11:59.294231 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725"} Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.950064 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.950970 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.950983 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951021 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951027 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951037 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951043 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" 
Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951058 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951065 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951086 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951092 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951104 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951109 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951324 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951338 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.953566 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.967362 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.138173 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.138493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.138611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241437 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241509 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241665 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241959 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.242038 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.268370 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.283313 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:35 crc kubenswrapper[4907]: I0127 19:12:35.006210 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:35 crc kubenswrapper[4907]: W0127 19:12:35.553681 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9d40db_3b3f_4272_a686_47234c2aa239.slice/crio-1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed WatchSource:0}: Error finding container 1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed: Status 404 returned error can't find the container with id 1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed Jan 27 19:12:35 crc kubenswrapper[4907]: I0127 19:12:35.716341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerStarted","Data":"1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed"} Jan 27 19:12:36 crc kubenswrapper[4907]: I0127 19:12:36.731060 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" exitCode=0 Jan 27 19:12:36 crc kubenswrapper[4907]: I0127 19:12:36.731134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5"} Jan 27 19:12:36 crc kubenswrapper[4907]: I0127 19:12:36.734050 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:12:38 crc kubenswrapper[4907]: I0127 19:12:38.764084 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerStarted","Data":"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6"} Jan 27 19:12:39 crc kubenswrapper[4907]: I0127 19:12:39.776891 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" exitCode=0 Jan 27 19:12:39 crc kubenswrapper[4907]: I0127 19:12:39.776945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6"} Jan 27 19:12:40 crc kubenswrapper[4907]: I0127 19:12:40.788506 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerStarted","Data":"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79"} Jan 27 19:12:40 crc kubenswrapper[4907]: I0127 19:12:40.814850 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v2bdb" podStartSLOduration=4.355952375 podStartE2EDuration="7.814827694s" podCreationTimestamp="2026-01-27 19:12:33 +0000 UTC" firstStartedPulling="2026-01-27 19:12:36.733847107 +0000 UTC m=+4011.863129719" lastFinishedPulling="2026-01-27 19:12:40.192722426 +0000 UTC m=+4015.322005038" observedRunningTime="2026-01-27 19:12:40.80507285 +0000 UTC m=+4015.934355462" watchObservedRunningTime="2026-01-27 19:12:40.814827694 +0000 UTC m=+4015.944110306" Jan 27 19:12:44 crc kubenswrapper[4907]: I0127 19:12:44.283694 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:44 crc kubenswrapper[4907]: I0127 19:12:44.284301 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:44 crc kubenswrapper[4907]: I0127 19:12:44.343028 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:54 crc kubenswrapper[4907]: I0127 19:12:54.333948 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:54 crc kubenswrapper[4907]: I0127 19:12:54.389117 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:54 crc kubenswrapper[4907]: I0127 19:12:54.932455 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v2bdb" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" containerID="cri-o://5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" gracePeriod=2 Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.675591 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.879785 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"3d9d40db-3b3f-4272-a686-47234c2aa239\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.879836 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"3d9d40db-3b3f-4272-a686-47234c2aa239\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.880141 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"3d9d40db-3b3f-4272-a686-47234c2aa239\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.882715 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities" (OuterVolumeSpecName: "utilities") pod "3d9d40db-3b3f-4272-a686-47234c2aa239" (UID: "3d9d40db-3b3f-4272-a686-47234c2aa239"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.887603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj" (OuterVolumeSpecName: "kube-api-access-l99sj") pod "3d9d40db-3b3f-4272-a686-47234c2aa239" (UID: "3d9d40db-3b3f-4272-a686-47234c2aa239"). InnerVolumeSpecName "kube-api-access-l99sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948537 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" exitCode=0 Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948649 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79"} Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948736 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948816 4907 scope.go:117] "RemoveContainer" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948752 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed"} Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.960646 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d9d40db-3b3f-4272-a686-47234c2aa239" (UID: "3d9d40db-3b3f-4272-a686-47234c2aa239"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.975363 4907 scope.go:117] "RemoveContainer" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.984184 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.984242 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.984257 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.005913 4907 scope.go:117] "RemoveContainer" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.056973 4907 scope.go:117] "RemoveContainer" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" Jan 27 19:12:56 crc kubenswrapper[4907]: E0127 19:12:56.071248 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79\": container with ID starting with 5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79 not found: ID does not exist" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071322 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79"} err="failed to get container status \"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79\": rpc error: code = NotFound desc = could not find container \"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79\": container with ID starting with 5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79 not found: ID does not exist" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071359 4907 scope.go:117] "RemoveContainer" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" Jan 27 19:12:56 crc kubenswrapper[4907]: E0127 19:12:56.071916 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6\": container with ID starting with 3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6 not found: ID does not exist" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071955 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6"} err="failed to get container status \"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6\": rpc error: code = NotFound desc = could not find container \"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6\": container with ID starting with 3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6 not found: ID does not exist" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071982 4907 scope.go:117] "RemoveContainer" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" Jan 27 19:12:56 crc kubenswrapper[4907]: E0127 19:12:56.072306 4907 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5\": container with ID starting with bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5 not found: ID does not exist" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.072348 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5"} err="failed to get container status \"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5\": rpc error: code = NotFound desc = could not find container \"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5\": container with ID starting with bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5 not found: ID does not exist" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.303063 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.317844 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:57 crc kubenswrapper[4907]: I0127 19:12:57.759402 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" path="/var/lib/kubelet/pods/3d9d40db-3b3f-4272-a686-47234c2aa239/volumes" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.618654 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:13:50 crc kubenswrapper[4907]: E0127 19:13:50.640780 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-content" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.640824 4907 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-content" Jan 27 19:13:50 crc kubenswrapper[4907]: E0127 19:13:50.640849 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.640855 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" Jan 27 19:13:50 crc kubenswrapper[4907]: E0127 19:13:50.640960 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-utilities" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.640967 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-utilities" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.642064 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.645718 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.668419 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.713361 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.713764 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.713820 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816138 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816202 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816232 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816978 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.817034 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.949636 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:51 crc kubenswrapper[4907]: I0127 19:13:51.046586 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:51 crc kubenswrapper[4907]: I0127 19:13:51.597164 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:13:52 crc kubenswrapper[4907]: I0127 19:13:52.542214 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" exitCode=0 Jan 27 19:13:52 crc kubenswrapper[4907]: I0127 19:13:52.542281 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4"} Jan 27 19:13:52 crc kubenswrapper[4907]: I0127 19:13:52.542617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerStarted","Data":"c9951ffb29d61c93df43d7bd0f43abc0771ff6370be8ef692b11392d4241602c"} Jan 27 19:13:54 crc kubenswrapper[4907]: I0127 19:13:54.573032 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerStarted","Data":"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854"} Jan 27 19:13:59 crc kubenswrapper[4907]: I0127 19:13:59.715504 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" exitCode=0 Jan 27 19:13:59 crc kubenswrapper[4907]: I0127 19:13:59.716207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" 
event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854"} Jan 27 19:14:00 crc kubenswrapper[4907]: I0127 19:14:00.728242 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerStarted","Data":"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec"} Jan 27 19:14:00 crc kubenswrapper[4907]: I0127 19:14:00.754347 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sv8q2" podStartSLOduration=3.027383903 podStartE2EDuration="10.754324308s" podCreationTimestamp="2026-01-27 19:13:50 +0000 UTC" firstStartedPulling="2026-01-27 19:13:52.544509332 +0000 UTC m=+4087.673791944" lastFinishedPulling="2026-01-27 19:14:00.271449737 +0000 UTC m=+4095.400732349" observedRunningTime="2026-01-27 19:14:00.745380316 +0000 UTC m=+4095.874662938" watchObservedRunningTime="2026-01-27 19:14:00.754324308 +0000 UTC m=+4095.883606930" Jan 27 19:14:01 crc kubenswrapper[4907]: I0127 19:14:01.047835 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:01 crc kubenswrapper[4907]: I0127 19:14:01.048200 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:02 crc kubenswrapper[4907]: I0127 19:14:02.111254 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" probeResult="failure" output=< Jan 27 19:14:02 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:14:02 crc kubenswrapper[4907]: > Jan 27 19:14:12 crc kubenswrapper[4907]: I0127 19:14:12.107314 4907 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" probeResult="failure" output=< Jan 27 19:14:12 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:14:12 crc kubenswrapper[4907]: > Jan 27 19:14:22 crc kubenswrapper[4907]: I0127 19:14:22.103098 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" probeResult="failure" output=< Jan 27 19:14:22 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:14:22 crc kubenswrapper[4907]: > Jan 27 19:14:26 crc kubenswrapper[4907]: I0127 19:14:26.524130 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:14:26 crc kubenswrapper[4907]: I0127 19:14:26.524675 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:14:31 crc kubenswrapper[4907]: I0127 19:14:31.095908 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:31 crc kubenswrapper[4907]: I0127 19:14:31.155121 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:31 crc kubenswrapper[4907]: I0127 19:14:31.334757 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.081216 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" containerID="cri-o://5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" gracePeriod=2 Jan 27 19:14:33 crc kubenswrapper[4907]: E0127 19:14:33.276696 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef84ef6f_caca_4d3d_a89c_689d9183ee8d.slice/crio-5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.662473 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.736601 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.750931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.750991 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.755384 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities" (OuterVolumeSpecName: "utilities") pod "ef84ef6f-caca-4d3d-a89c-689d9183ee8d" (UID: "ef84ef6f-caca-4d3d-a89c-689d9183ee8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.771948 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt" (OuterVolumeSpecName: "kube-api-access-gnlnt") pod "ef84ef6f-caca-4d3d-a89c-689d9183ee8d" (UID: "ef84ef6f-caca-4d3d-a89c-689d9183ee8d"). InnerVolumeSpecName "kube-api-access-gnlnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.854779 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.854809 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.886196 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef84ef6f-caca-4d3d-a89c-689d9183ee8d" (UID: "ef84ef6f-caca-4d3d-a89c-689d9183ee8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.956824 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.096057 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" exitCode=0 Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.096189 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.097172 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec"} Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.097425 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"c9951ffb29d61c93df43d7bd0f43abc0771ff6370be8ef692b11392d4241602c"} Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.097494 4907 scope.go:117] "RemoveContainer" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.129502 4907 scope.go:117] "RemoveContainer" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.151055 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 
19:14:34.169471 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.567072 4907 scope.go:117] "RemoveContainer" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.626252 4907 scope.go:117] "RemoveContainer" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" Jan 27 19:14:34 crc kubenswrapper[4907]: E0127 19:14:34.626794 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec\": container with ID starting with 5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec not found: ID does not exist" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.626845 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec"} err="failed to get container status \"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec\": rpc error: code = NotFound desc = could not find container \"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec\": container with ID starting with 5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec not found: ID does not exist" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.626876 4907 scope.go:117] "RemoveContainer" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" Jan 27 19:14:34 crc kubenswrapper[4907]: E0127 19:14:34.627194 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854\": container with ID 
starting with 1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854 not found: ID does not exist" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.627218 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854"} err="failed to get container status \"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854\": rpc error: code = NotFound desc = could not find container \"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854\": container with ID starting with 1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854 not found: ID does not exist" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.627233 4907 scope.go:117] "RemoveContainer" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" Jan 27 19:14:34 crc kubenswrapper[4907]: E0127 19:14:34.627442 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4\": container with ID starting with 1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4 not found: ID does not exist" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.627463 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4"} err="failed to get container status \"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4\": rpc error: code = NotFound desc = could not find container \"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4\": container with ID starting with 1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4 not found: 
ID does not exist" Jan 27 19:14:35 crc kubenswrapper[4907]: I0127 19:14:35.763625 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" path="/var/lib/kubelet/pods/ef84ef6f-caca-4d3d-a89c-689d9183ee8d/volumes" Jan 27 19:14:56 crc kubenswrapper[4907]: I0127 19:14:56.521420 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:14:56 crc kubenswrapper[4907]: I0127 19:14:56.522039 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.201238 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb"] Jan 27 19:15:00 crc kubenswrapper[4907]: E0127 19:15:00.202434 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202453 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4907]: E0127 19:15:00.202508 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="extract-utilities" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202517 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" 
containerName="extract-utilities" Jan 27 19:15:00 crc kubenswrapper[4907]: E0127 19:15:00.202538 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="extract-content" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202551 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="extract-content" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202896 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.204048 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.206156 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.206392 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.229067 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb"] Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.307228 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.307500 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.307669 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.409897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.410006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.410092 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.411877 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.423406 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.435075 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.527102 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:01 crc kubenswrapper[4907]: I0127 19:15:01.599808 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb"] Jan 27 19:15:02 crc kubenswrapper[4907]: I0127 19:15:02.383523 4907 generic.go:334] "Generic (PLEG): container finished" podID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerID="88bf80890e776fa0c9d90ac4dc0c374ca04d5702e03cfe204b051512a22db9f4" exitCode=0 Jan 27 19:15:02 crc kubenswrapper[4907]: I0127 19:15:02.383593 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" event={"ID":"b2386ffb-533f-4e55-8e2c-b56b123db6cb","Type":"ContainerDied","Data":"88bf80890e776fa0c9d90ac4dc0c374ca04d5702e03cfe204b051512a22db9f4"} Jan 27 19:15:02 crc kubenswrapper[4907]: I0127 19:15:02.383884 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" event={"ID":"b2386ffb-533f-4e55-8e2c-b56b123db6cb","Type":"ContainerStarted","Data":"2a5d9b6ad1a69b1ab3e6ddf940674710d95a2dde6403d7e081c05553fc6e53c7"} Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.815960 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.921296 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.921572 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.921686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.924053 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2386ffb-533f-4e55-8e2c-b56b123db6cb" (UID: "b2386ffb-533f-4e55-8e2c-b56b123db6cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.938043 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b2386ffb-533f-4e55-8e2c-b56b123db6cb" (UID: "b2386ffb-533f-4e55-8e2c-b56b123db6cb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.938170 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq" (OuterVolumeSpecName: "kube-api-access-ks9kq") pod "b2386ffb-533f-4e55-8e2c-b56b123db6cb" (UID: "b2386ffb-533f-4e55-8e2c-b56b123db6cb"). InnerVolumeSpecName "kube-api-access-ks9kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.025638 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.025693 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.025706 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.407779 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" event={"ID":"b2386ffb-533f-4e55-8e2c-b56b123db6cb","Type":"ContainerDied","Data":"2a5d9b6ad1a69b1ab3e6ddf940674710d95a2dde6403d7e081c05553fc6e53c7"} Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.408175 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5d9b6ad1a69b1ab3e6ddf940674710d95a2dde6403d7e081c05553fc6e53c7" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.407874 4907 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.899805 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.911300 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 19:15:05 crc kubenswrapper[4907]: I0127 19:15:05.766071 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" path="/var/lib/kubelet/pods/e18727dc-e815-4722-bbce-4bfe5a8ee4f2/volumes" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.064536 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:18 crc kubenswrapper[4907]: E0127 19:15:18.065943 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerName="collect-profiles" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.065965 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerName="collect-profiles" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.066381 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerName="collect-profiles" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.069638 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.086361 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.155349 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.155483 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.155591 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.258933 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.259099 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.259489 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.259930 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.260360 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.280435 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.404726 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.957970 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:19 crc kubenswrapper[4907]: I0127 19:15:19.601439 4907 generic.go:334] "Generic (PLEG): container finished" podID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" exitCode=0 Jan 27 19:15:19 crc kubenswrapper[4907]: I0127 19:15:19.601494 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70"} Jan 27 19:15:19 crc kubenswrapper[4907]: I0127 19:15:19.601525 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerStarted","Data":"b5b7155a8a43c9ece4bd431e5222b2faef155ab6cd58f2badefa29db92f07f77"} Jan 27 19:15:20 crc kubenswrapper[4907]: I0127 19:15:20.614607 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerStarted","Data":"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750"} Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.247808 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.250190 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.260309 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.338821 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.338953 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.339022 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.441541 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442135 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442308 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442402 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442712 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.482468 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.585127 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.188191 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.638194 4907 generic.go:334] "Generic (PLEG): container finished" podID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerID="3dee0fe46738fb530860500418b634b5c7769eae19987b6473b771ddb3aa71b2" exitCode=0 Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.638314 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"3dee0fe46738fb530860500418b634b5c7769eae19987b6473b771ddb3aa71b2"} Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.639447 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerStarted","Data":"6998dd2a5993f2277280b5abce6a94a092bf7b305f23f23ae12483299cadf9b0"} Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.645585 4907 generic.go:334] "Generic (PLEG): container finished" podID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" exitCode=0 Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.645624 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750"} Jan 27 19:15:24 crc kubenswrapper[4907]: I0127 19:15:24.668852 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" 
event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerStarted","Data":"590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef"} Jan 27 19:15:24 crc kubenswrapper[4907]: I0127 19:15:24.671472 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerStarted","Data":"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef"} Jan 27 19:15:24 crc kubenswrapper[4907]: I0127 19:15:24.727274 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jhlp6" podStartSLOduration=3.191903899 podStartE2EDuration="6.727255501s" podCreationTimestamp="2026-01-27 19:15:18 +0000 UTC" firstStartedPulling="2026-01-27 19:15:19.603529214 +0000 UTC m=+4174.732811836" lastFinishedPulling="2026-01-27 19:15:23.138880826 +0000 UTC m=+4178.268163438" observedRunningTime="2026-01-27 19:15:24.711480007 +0000 UTC m=+4179.840762619" watchObservedRunningTime="2026-01-27 19:15:24.727255501 +0000 UTC m=+4179.856538113" Jan 27 19:15:25 crc kubenswrapper[4907]: I0127 19:15:25.684304 4907 generic.go:334] "Generic (PLEG): container finished" podID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerID="590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef" exitCode=0 Jan 27 19:15:25 crc kubenswrapper[4907]: I0127 19:15:25.684354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef"} Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.521796 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.522471 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.522697 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.523707 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.523871 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725" gracePeriod=600 Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.710734 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerStarted","Data":"656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d"} Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714228 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" 
containerID="8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725" exitCode=0 Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725"} Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41"} Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714397 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.733287 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crjt4" podStartSLOduration=2.587407649 podStartE2EDuration="6.733270774s" podCreationTimestamp="2026-01-27 19:15:21 +0000 UTC" firstStartedPulling="2026-01-27 19:15:22.64020213 +0000 UTC m=+4177.769484742" lastFinishedPulling="2026-01-27 19:15:26.786065245 +0000 UTC m=+4181.915347867" observedRunningTime="2026-01-27 19:15:27.73239749 +0000 UTC m=+4182.861680122" watchObservedRunningTime="2026-01-27 19:15:27.733270774 +0000 UTC m=+4182.862553386" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 19:15:28.492805 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 19:15:28.493319 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 
19:15:28.566369 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 19:15:28.778722 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:30 crc kubenswrapper[4907]: I0127 19:15:30.649347 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:30 crc kubenswrapper[4907]: I0127 19:15:30.750027 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jhlp6" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" containerID="cri-o://a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" gracePeriod=2 Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.352706 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.357993 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"4ef5aee7-bf46-43d8-9adb-55a7add33715\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.358238 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"4ef5aee7-bf46-43d8-9adb-55a7add33715\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.358277 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4szf\" (UniqueName: 
\"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"4ef5aee7-bf46-43d8-9adb-55a7add33715\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.358973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities" (OuterVolumeSpecName: "utilities") pod "4ef5aee7-bf46-43d8-9adb-55a7add33715" (UID: "4ef5aee7-bf46-43d8-9adb-55a7add33715"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.365385 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf" (OuterVolumeSpecName: "kube-api-access-q4szf") pod "4ef5aee7-bf46-43d8-9adb-55a7add33715" (UID: "4ef5aee7-bf46-43d8-9adb-55a7add33715"). InnerVolumeSpecName "kube-api-access-q4szf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.433735 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ef5aee7-bf46-43d8-9adb-55a7add33715" (UID: "4ef5aee7-bf46-43d8-9adb-55a7add33715"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.460134 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.460166 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.460177 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.585591 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.585647 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.642099 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.789675 4907 generic.go:334] "Generic (PLEG): container finished" podID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" exitCode=0 Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.789926 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" 
event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef"} Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.790073 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.790845 4907 scope.go:117] "RemoveContainer" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.795662 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"b5b7155a8a43c9ece4bd431e5222b2faef155ab6cd58f2badefa29db92f07f77"} Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.834329 4907 scope.go:117] "RemoveContainer" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.852848 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.863679 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.868102 4907 scope.go:117] "RemoveContainer" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.882287 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.920347 4907 scope.go:117] "RemoveContainer" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" Jan 27 19:15:31 crc kubenswrapper[4907]: E0127 19:15:31.920784 4907 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef\": container with ID starting with a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef not found: ID does not exist" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.920834 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef"} err="failed to get container status \"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef\": rpc error: code = NotFound desc = could not find container \"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef\": container with ID starting with a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef not found: ID does not exist" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.920865 4907 scope.go:117] "RemoveContainer" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" Jan 27 19:15:31 crc kubenswrapper[4907]: E0127 19:15:31.921348 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750\": container with ID starting with 6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750 not found: ID does not exist" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.921378 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750"} err="failed to get container status \"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750\": rpc error: code = NotFound desc = could 
not find container \"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750\": container with ID starting with 6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750 not found: ID does not exist" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.921396 4907 scope.go:117] "RemoveContainer" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" Jan 27 19:15:31 crc kubenswrapper[4907]: E0127 19:15:31.921745 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70\": container with ID starting with 081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70 not found: ID does not exist" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.921779 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70"} err="failed to get container status \"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70\": rpc error: code = NotFound desc = could not find container \"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70\": container with ID starting with 081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70 not found: ID does not exist" Jan 27 19:15:33 crc kubenswrapper[4907]: I0127 19:15:33.761391 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" path="/var/lib/kubelet/pods/4ef5aee7-bf46-43d8-9adb-55a7add33715/volumes" Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.039245 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.039510 4907 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-crjt4" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" containerID="cri-o://656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d" gracePeriod=2 Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.825520 4907 generic.go:334] "Generic (PLEG): container finished" podID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerID="656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d" exitCode=0 Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.825622 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d"} Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.980668 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.166275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.166520 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.166627 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod 
\"c722dd63-6f6a-4a90-b8dc-f783ea762dee\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.167904 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities" (OuterVolumeSpecName: "utilities") pod "c722dd63-6f6a-4a90-b8dc-f783ea762dee" (UID: "c722dd63-6f6a-4a90-b8dc-f783ea762dee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.172917 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk" (OuterVolumeSpecName: "kube-api-access-gc6vk") pod "c722dd63-6f6a-4a90-b8dc-f783ea762dee" (UID: "c722dd63-6f6a-4a90-b8dc-f783ea762dee"). InnerVolumeSpecName "kube-api-access-gc6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.187582 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c722dd63-6f6a-4a90-b8dc-f783ea762dee" (UID: "c722dd63-6f6a-4a90-b8dc-f783ea762dee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.269501 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.269542 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.269579 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.840472 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"6998dd2a5993f2277280b5abce6a94a092bf7b305f23f23ae12483299cadf9b0"} Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.840674 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.840866 4907 scope.go:117] "RemoveContainer" containerID="656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.870695 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.871135 4907 scope.go:117] "RemoveContainer" containerID="590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.882176 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.908486 4907 scope.go:117] "RemoveContainer" containerID="3dee0fe46738fb530860500418b634b5c7769eae19987b6473b771ddb3aa71b2" Jan 27 19:15:37 crc kubenswrapper[4907]: I0127 19:15:37.763469 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" path="/var/lib/kubelet/pods/c722dd63-6f6a-4a90-b8dc-f783ea762dee/volumes" Jan 27 19:16:01 crc kubenswrapper[4907]: I0127 19:16:01.218603 4907 scope.go:117] "RemoveContainer" containerID="ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3" Jan 27 19:17:56 crc kubenswrapper[4907]: I0127 19:17:56.521336 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:17:56 crc kubenswrapper[4907]: I0127 19:17:56.522031 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:18:26 crc kubenswrapper[4907]: I0127 19:18:26.520907 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:18:26 crc kubenswrapper[4907]: I0127 19:18:26.521521 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:18:37 crc kubenswrapper[4907]: I0127 19:18:37.812728 4907 trace.go:236] Trace[1086698741]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (27-Jan-2026 19:18:36.649) (total time: 1162ms): Jan 27 19:18:37 crc kubenswrapper[4907]: Trace[1086698741]: [1.162355942s] [1.162355942s] END Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.521460 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.522120 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 
19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.522178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.523226 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.523291 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" gracePeriod=600 Jan 27 19:18:57 crc kubenswrapper[4907]: E0127 19:18:57.675270 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.165847 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" exitCode=0 Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.165888 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41"} Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.165931 4907 scope.go:117] "RemoveContainer" containerID="8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725" Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.166748 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:18:58 crc kubenswrapper[4907]: E0127 19:18:58.167095 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:13 crc kubenswrapper[4907]: I0127 19:19:13.750733 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:13 crc kubenswrapper[4907]: E0127 19:19:13.751642 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:27 crc kubenswrapper[4907]: I0127 19:19:27.751071 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:27 crc kubenswrapper[4907]: E0127 19:19:27.751878 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:39 crc kubenswrapper[4907]: I0127 19:19:39.748679 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:39 crc kubenswrapper[4907]: E0127 19:19:39.749595 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:53 crc kubenswrapper[4907]: I0127 19:19:53.749757 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:53 crc kubenswrapper[4907]: E0127 19:19:53.751416 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:06 crc kubenswrapper[4907]: I0127 19:20:06.748301 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:06 crc kubenswrapper[4907]: E0127 19:20:06.749177 4907 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:19 crc kubenswrapper[4907]: I0127 19:20:19.748904 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:19 crc kubenswrapper[4907]: E0127 19:20:19.749923 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:30 crc kubenswrapper[4907]: I0127 19:20:30.748427 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:30 crc kubenswrapper[4907]: E0127 19:20:30.749361 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:43 crc kubenswrapper[4907]: I0127 19:20:43.748068 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:43 crc kubenswrapper[4907]: E0127 19:20:43.748863 4907 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:57 crc kubenswrapper[4907]: I0127 19:20:56.998752 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:57 crc kubenswrapper[4907]: E0127 19:20:57.072535 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:09 crc kubenswrapper[4907]: I0127 19:21:09.748160 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:09 crc kubenswrapper[4907]: E0127 19:21:09.749083 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:22 crc kubenswrapper[4907]: I0127 19:21:22.748310 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:22 crc kubenswrapper[4907]: E0127 19:21:22.749076 4907 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:27 crc kubenswrapper[4907]: E0127 19:21:27.079024 4907 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.184:33290->38.102.83.184:45697: read tcp 38.102.83.184:33290->38.102.83.184:45697: read: connection reset by peer Jan 27 19:21:34 crc kubenswrapper[4907]: I0127 19:21:34.748516 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:34 crc kubenswrapper[4907]: E0127 19:21:34.749463 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:45 crc kubenswrapper[4907]: I0127 19:21:45.763523 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:45 crc kubenswrapper[4907]: E0127 19:21:45.765137 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:57 crc kubenswrapper[4907]: I0127 19:21:57.748753 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:57 crc kubenswrapper[4907]: E0127 19:21:57.750738 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:09 crc kubenswrapper[4907]: I0127 19:22:09.748962 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:09 crc kubenswrapper[4907]: E0127 19:22:09.749833 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:21 crc kubenswrapper[4907]: I0127 19:22:21.749330 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:21 crc kubenswrapper[4907]: E0127 19:22:21.750123 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:35 crc kubenswrapper[4907]: I0127 19:22:35.748548 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:35 crc kubenswrapper[4907]: E0127 19:22:35.750648 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.817886 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818868 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818880 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818890 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818898 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818914 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: 
I0127 19:22:42.818920 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818938 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818944 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818960 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818967 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818992 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818998 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.819197 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.819225 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.820988 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.852527 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.983974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.984055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.984373 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087023 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087130 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087173 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087864 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.108834 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.146333 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.768055 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.352054 4907 generic.go:334] "Generic (PLEG): container finished" podID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" exitCode=0 Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.352357 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40"} Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.352384 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerStarted","Data":"f21220de24ed1f78011e22d3e61bccfd658536a3d27cca1de7576fd6efef89ec"} Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.354739 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:22:45 crc kubenswrapper[4907]: I0127 19:22:45.364362 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerStarted","Data":"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56"} Jan 27 19:22:47 crc kubenswrapper[4907]: I0127 19:22:47.386353 4907 generic.go:334] "Generic (PLEG): container finished" podID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" exitCode=0 Jan 27 19:22:47 crc kubenswrapper[4907]: I0127 19:22:47.386661 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56"} Jan 27 19:22:48 crc kubenswrapper[4907]: I0127 19:22:48.399143 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerStarted","Data":"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9"} Jan 27 19:22:48 crc kubenswrapper[4907]: I0127 19:22:48.428724 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xsfzv" podStartSLOduration=2.943749236 podStartE2EDuration="6.42870355s" podCreationTimestamp="2026-01-27 19:22:42 +0000 UTC" firstStartedPulling="2026-01-27 19:22:44.354530333 +0000 UTC m=+4619.483812945" lastFinishedPulling="2026-01-27 19:22:47.839484657 +0000 UTC m=+4622.968767259" observedRunningTime="2026-01-27 19:22:48.417041471 +0000 UTC m=+4623.546324103" watchObservedRunningTime="2026-01-27 19:22:48.42870355 +0000 UTC m=+4623.557986162" Jan 27 19:22:49 crc kubenswrapper[4907]: I0127 19:22:49.748358 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:49 crc kubenswrapper[4907]: E0127 19:22:49.749137 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.146663 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.147183 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.210660 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.497970 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.545683 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:55 crc kubenswrapper[4907]: I0127 19:22:55.492125 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xsfzv" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" containerID="cri-o://fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" gracePeriod=2 Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.011757 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.024949 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.025058 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.025220 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.026047 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities" (OuterVolumeSpecName: "utilities") pod "9db2604e-4039-4a0d-8bf9-f80c51d3df52" (UID: "9db2604e-4039-4a0d-8bf9-f80c51d3df52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.032228 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk" (OuterVolumeSpecName: "kube-api-access-hqchk") pod "9db2604e-4039-4a0d-8bf9-f80c51d3df52" (UID: "9db2604e-4039-4a0d-8bf9-f80c51d3df52"). InnerVolumeSpecName "kube-api-access-hqchk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.081265 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9db2604e-4039-4a0d-8bf9-f80c51d3df52" (UID: "9db2604e-4039-4a0d-8bf9-f80c51d3df52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.127803 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.127831 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.127842 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510058 4907 generic.go:334] "Generic (PLEG): container finished" podID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" exitCode=0 Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510131 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9"} Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510433 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"f21220de24ed1f78011e22d3e61bccfd658536a3d27cca1de7576fd6efef89ec"} Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510454 4907 scope.go:117] "RemoveContainer" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510164 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.544768 4907 scope.go:117] "RemoveContainer" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.559564 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.570729 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.577540 4907 scope.go:117] "RemoveContainer" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.627453 4907 scope.go:117] "RemoveContainer" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" Jan 27 19:22:56 crc kubenswrapper[4907]: E0127 19:22:56.628019 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9\": container with ID starting with fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9 not found: ID does not exist" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 
19:22:56.628052 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9"} err="failed to get container status \"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9\": rpc error: code = NotFound desc = could not find container \"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9\": container with ID starting with fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9 not found: ID does not exist" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628079 4907 scope.go:117] "RemoveContainer" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" Jan 27 19:22:56 crc kubenswrapper[4907]: E0127 19:22:56.628421 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56\": container with ID starting with b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56 not found: ID does not exist" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628448 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56"} err="failed to get container status \"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56\": rpc error: code = NotFound desc = could not find container \"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56\": container with ID starting with b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56 not found: ID does not exist" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628464 4907 scope.go:117] "RemoveContainer" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" Jan 27 19:22:56 crc 
kubenswrapper[4907]: E0127 19:22:56.628757 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40\": container with ID starting with a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40 not found: ID does not exist" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628803 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40"} err="failed to get container status \"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40\": rpc error: code = NotFound desc = could not find container \"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40\": container with ID starting with a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40 not found: ID does not exist" Jan 27 19:22:57 crc kubenswrapper[4907]: I0127 19:22:57.761973 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" path="/var/lib/kubelet/pods/9db2604e-4039-4a0d-8bf9-f80c51d3df52/volumes" Jan 27 19:23:01 crc kubenswrapper[4907]: I0127 19:23:01.748744 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:01 crc kubenswrapper[4907]: E0127 19:23:01.749588 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:23:16 crc 
kubenswrapper[4907]: I0127 19:23:16.748760 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:16 crc kubenswrapper[4907]: E0127 19:23:16.749827 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:23:27 crc kubenswrapper[4907]: I0127 19:23:27.748237 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:27 crc kubenswrapper[4907]: E0127 19:23:27.749284 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:23:39 crc kubenswrapper[4907]: I0127 19:23:39.748372 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:39 crc kubenswrapper[4907]: E0127 19:23:39.749327 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 
27 19:23:52 crc kubenswrapper[4907]: I0127 19:23:52.748645 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:52 crc kubenswrapper[4907]: E0127 19:23:52.749539 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.274477 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:01 crc kubenswrapper[4907]: E0127 19:24:01.275478 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="extract-utilities" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275491 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="extract-utilities" Jan 27 19:24:01 crc kubenswrapper[4907]: E0127 19:24:01.275516 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275522 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" Jan 27 19:24:01 crc kubenswrapper[4907]: E0127 19:24:01.275532 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="extract-content" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275539 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" 
containerName="extract-content" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275805 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.277388 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.290117 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.369190 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.369258 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.369632 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472322 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472428 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472454 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472916 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.473134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.496873 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpszm\" (UniqueName: 
\"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.610951 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:02 crc kubenswrapper[4907]: I0127 19:24:02.271178 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:03 crc kubenswrapper[4907]: I0127 19:24:03.307689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerStarted","Data":"7a829611836c322ca4f4e5dd10cdaa680f206e6728d3a0c864acdd6c860546a9"} Jan 27 19:24:04 crc kubenswrapper[4907]: I0127 19:24:04.319206 4907 generic.go:334] "Generic (PLEG): container finished" podID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" exitCode=0 Jan 27 19:24:04 crc kubenswrapper[4907]: I0127 19:24:04.319306 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242"} Jan 27 19:24:05 crc kubenswrapper[4907]: I0127 19:24:05.332811 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerStarted","Data":"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6"} Jan 27 19:24:07 crc kubenswrapper[4907]: I0127 19:24:07.749420 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:24:08 crc 
kubenswrapper[4907]: I0127 19:24:08.366277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3"} Jan 27 19:24:13 crc kubenswrapper[4907]: I0127 19:24:13.423356 4907 generic.go:334] "Generic (PLEG): container finished" podID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" exitCode=0 Jan 27 19:24:13 crc kubenswrapper[4907]: I0127 19:24:13.423450 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6"} Jan 27 19:24:16 crc kubenswrapper[4907]: I0127 19:24:16.459544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerStarted","Data":"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5"} Jan 27 19:24:16 crc kubenswrapper[4907]: I0127 19:24:16.488079 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cggpc" podStartSLOduration=4.883858886 podStartE2EDuration="15.488058063s" podCreationTimestamp="2026-01-27 19:24:01 +0000 UTC" firstStartedPulling="2026-01-27 19:24:04.321260847 +0000 UTC m=+4699.450543459" lastFinishedPulling="2026-01-27 19:24:14.925460024 +0000 UTC m=+4710.054742636" observedRunningTime="2026-01-27 19:24:16.477643109 +0000 UTC m=+4711.606925731" watchObservedRunningTime="2026-01-27 19:24:16.488058063 +0000 UTC m=+4711.617340675" Jan 27 19:24:21 crc kubenswrapper[4907]: I0127 19:24:21.611621 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:21 crc kubenswrapper[4907]: I0127 19:24:21.612276 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:22 crc kubenswrapper[4907]: I0127 19:24:22.670955 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" probeResult="failure" output=< Jan 27 19:24:22 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:24:22 crc kubenswrapper[4907]: > Jan 27 19:24:32 crc kubenswrapper[4907]: I0127 19:24:32.665704 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" probeResult="failure" output=< Jan 27 19:24:32 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:24:32 crc kubenswrapper[4907]: > Jan 27 19:24:42 crc kubenswrapper[4907]: I0127 19:24:42.669383 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" probeResult="failure" output=< Jan 27 19:24:42 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:24:42 crc kubenswrapper[4907]: > Jan 27 19:24:51 crc kubenswrapper[4907]: I0127 19:24:51.687425 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:51 crc kubenswrapper[4907]: I0127 19:24:51.762267 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:51 crc kubenswrapper[4907]: I0127 19:24:51.937317 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:52 crc kubenswrapper[4907]: I0127 19:24:52.911208 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" containerID="cri-o://24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" gracePeriod=2 Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.585740 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.690798 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.691091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.691232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.693008 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities" (OuterVolumeSpecName: "utilities") pod "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" (UID: 
"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.701570 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm" (OuterVolumeSpecName: "kube-api-access-jpszm") pod "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" (UID: "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589"). InnerVolumeSpecName "kube-api-access-jpszm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.795873 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.795905 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.817054 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" (UID: "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.899291 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.924746 4907 generic.go:334] "Generic (PLEG): container finished" podID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" exitCode=0 Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.924804 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.924823 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5"} Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.925150 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"7a829611836c322ca4f4e5dd10cdaa680f206e6728d3a0c864acdd6c860546a9"} Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.925166 4907 scope.go:117] "RemoveContainer" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.968892 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.978143 4907 scope.go:117] "RemoveContainer" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 
19:24:53.979335 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.019575 4907 scope.go:117] "RemoveContainer" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.059166 4907 scope.go:117] "RemoveContainer" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" Jan 27 19:24:54 crc kubenswrapper[4907]: E0127 19:24:54.059873 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5\": container with ID starting with 24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5 not found: ID does not exist" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.059933 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5"} err="failed to get container status \"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5\": rpc error: code = NotFound desc = could not find container \"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5\": container with ID starting with 24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5 not found: ID does not exist" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.059995 4907 scope.go:117] "RemoveContainer" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" Jan 27 19:24:54 crc kubenswrapper[4907]: E0127 19:24:54.060407 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6\": container with ID 
starting with daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6 not found: ID does not exist" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.060450 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6"} err="failed to get container status \"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6\": rpc error: code = NotFound desc = could not find container \"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6\": container with ID starting with daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6 not found: ID does not exist" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.060479 4907 scope.go:117] "RemoveContainer" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" Jan 27 19:24:54 crc kubenswrapper[4907]: E0127 19:24:54.060880 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242\": container with ID starting with e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242 not found: ID does not exist" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.060932 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242"} err="failed to get container status \"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242\": rpc error: code = NotFound desc = could not find container \"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242\": container with ID starting with e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242 not found: 
ID does not exist" Jan 27 19:24:55 crc kubenswrapper[4907]: I0127 19:24:55.763748 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" path="/var/lib/kubelet/pods/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589/volumes" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.345645 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:25:24 crc kubenswrapper[4907]: E0127 19:25:24.346674 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346690 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" Jan 27 19:25:24 crc kubenswrapper[4907]: E0127 19:25:24.346707 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-utilities" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346715 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-utilities" Jan 27 19:25:24 crc kubenswrapper[4907]: E0127 19:25:24.346730 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-content" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346739 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-content" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346959 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.348069 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.358433 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.359243 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.359318 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.359316 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5d7cl" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.389016 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438127 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438198 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438318 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438350 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438551 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438891 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438953 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.439002 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.541585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.541676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.541712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542113 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542199 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542222 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542301 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542381 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542697 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542822 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542932 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542982 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.543145 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.550144 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.553333 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.561663 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.565966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.592267 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc 
kubenswrapper[4907]: I0127 19:25:24.680114 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:25:25 crc kubenswrapper[4907]: I0127 19:25:25.189545 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:25:25 crc kubenswrapper[4907]: I0127 19:25:25.277497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerStarted","Data":"e485939d601422021124194f41b2edb21d01ebcfbafc4ed78de76b707da03560"} Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.340751 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.344710 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.378546 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.484318 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.484375 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc 
kubenswrapper[4907]: I0127 19:25:30.484462 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.586846 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.586947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.587161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.598474 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc 
kubenswrapper[4907]: I0127 19:25:30.598594 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.611342 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.674188 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:36 crc kubenswrapper[4907]: I0127 19:25:36.912145 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:25:37 crc kubenswrapper[4907]: I0127 19:25:37.449498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerStarted","Data":"9506bc4d9330db987c53766de95ed1b7f7f8e0c2c0e83af1c910638f6da62763"} Jan 27 19:25:38 crc kubenswrapper[4907]: I0127 19:25:38.462649 4907 generic.go:334] "Generic (PLEG): container finished" podID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerID="724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827" exitCode=0 Jan 27 19:25:38 crc kubenswrapper[4907]: I0127 19:25:38.463021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" 
event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827"} Jan 27 19:25:38 crc kubenswrapper[4907]: E0127 19:25:38.659461 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8892b4ff_3ac2_4d8d_ac52_b4853cea55b5.slice/crio-conmon-724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8892b4ff_3ac2_4d8d_ac52_b4853cea55b5.slice/crio-724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:25:40 crc kubenswrapper[4907]: I0127 19:25:40.486746 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerStarted","Data":"13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4"} Jan 27 19:25:42 crc kubenswrapper[4907]: I0127 19:25:42.514393 4907 generic.go:334] "Generic (PLEG): container finished" podID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerID="13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4" exitCode=0 Jan 27 19:25:42 crc kubenswrapper[4907]: I0127 19:25:42.514444 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4"} Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.097550 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.098277 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.097874 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.098215 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.097590 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:26:13 crc kubenswrapper[4907]: E0127 19:26:13.609340 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 19:26:13 crc kubenswrapper[4907]: E0127 19:26:13.614256 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},V
olumeMount{Name:kube-api-access-2cd2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(019838dd-5c5f-40f0-a169-09156549d64c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:26:13 crc kubenswrapper[4907]: E0127 19:26:13.615589 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="019838dd-5c5f-40f0-a169-09156549d64c" Jan 27 19:26:14 crc kubenswrapper[4907]: I0127 19:26:14.301230 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" 
event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerStarted","Data":"0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa"} Jan 27 19:26:14 crc kubenswrapper[4907]: E0127 19:26:14.303642 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="019838dd-5c5f-40f0-a169-09156549d64c" Jan 27 19:26:14 crc kubenswrapper[4907]: I0127 19:26:14.340983 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mswtp" podStartSLOduration=8.854357664 podStartE2EDuration="44.340966325s" podCreationTimestamp="2026-01-27 19:25:30 +0000 UTC" firstStartedPulling="2026-01-27 19:25:38.465765394 +0000 UTC m=+4793.595048006" lastFinishedPulling="2026-01-27 19:26:13.952374055 +0000 UTC m=+4829.081656667" observedRunningTime="2026-01-27 19:26:14.338350181 +0000 UTC m=+4829.467632833" watchObservedRunningTime="2026-01-27 19:26:14.340966325 +0000 UTC m=+4829.470248937" Jan 27 19:26:20 crc kubenswrapper[4907]: I0127 19:26:20.675039 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:20 crc kubenswrapper[4907]: I0127 19:26:20.675688 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:22 crc kubenswrapper[4907]: I0127 19:26:22.296502 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mswtp" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" probeResult="failure" output=< Jan 27 19:26:22 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:26:22 crc kubenswrapper[4907]: > Jan 
27 19:26:26 crc kubenswrapper[4907]: I0127 19:26:26.521104 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:26:26 crc kubenswrapper[4907]: I0127 19:26:26.521803 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:26:26 crc kubenswrapper[4907]: I0127 19:26:26.640519 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 19:26:29 crc kubenswrapper[4907]: I0127 19:26:29.457917 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerStarted","Data":"a46b349017119500621dd5d81eceaf280f07e4849a6fbfdb2535471de47390a8"} Jan 27 19:26:29 crc kubenswrapper[4907]: I0127 19:26:29.492636 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.043851142 podStartE2EDuration="1m6.492616238s" podCreationTimestamp="2026-01-27 19:25:23 +0000 UTC" firstStartedPulling="2026-01-27 19:25:25.188234644 +0000 UTC m=+4780.317517256" lastFinishedPulling="2026-01-27 19:26:26.63699974 +0000 UTC m=+4841.766282352" observedRunningTime="2026-01-27 19:26:29.484102428 +0000 UTC m=+4844.613385040" watchObservedRunningTime="2026-01-27 19:26:29.492616238 +0000 UTC m=+4844.621898850" Jan 27 19:26:31 crc kubenswrapper[4907]: I0127 19:26:31.732004 4907 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-mswtp" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" probeResult="failure" output=< Jan 27 19:26:31 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:26:31 crc kubenswrapper[4907]: > Jan 27 19:26:41 crc kubenswrapper[4907]: I0127 19:26:41.314003 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:41 crc kubenswrapper[4907]: I0127 19:26:41.375689 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:41 crc kubenswrapper[4907]: I0127 19:26:41.556300 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:26:42 crc kubenswrapper[4907]: I0127 19:26:42.612952 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mswtp" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" containerID="cri-o://0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa" gracePeriod=2 Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.625640 4907 generic.go:334] "Generic (PLEG): container finished" podID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerID="0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa" exitCode=0 Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.625719 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa"} Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.626077 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" 
event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"9506bc4d9330db987c53766de95ed1b7f7f8e0c2c0e83af1c910638f6da62763"} Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.626090 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9506bc4d9330db987c53766de95ed1b7f7f8e0c2c0e83af1c910638f6da62763" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.670363 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.870292 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.870361 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.870611 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.872045 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities" (OuterVolumeSpecName: "utilities") pod "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" (UID: "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.883623 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n" (OuterVolumeSpecName: "kube-api-access-5xm8n") pod "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" (UID: "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5"). InnerVolumeSpecName "kube-api-access-5xm8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.896944 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" (UID: "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.973751 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.974184 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.974199 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:44 crc kubenswrapper[4907]: I0127 19:26:44.636059 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:44 crc kubenswrapper[4907]: I0127 19:26:44.676781 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:26:44 crc kubenswrapper[4907]: I0127 19:26:44.687080 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:26:45 crc kubenswrapper[4907]: I0127 19:26:45.761372 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" path="/var/lib/kubelet/pods/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5/volumes" Jan 27 19:26:56 crc kubenswrapper[4907]: I0127 19:26:56.520979 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:26:56 crc kubenswrapper[4907]: I0127 19:26:56.521765 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.520960 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.521505 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.521589 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.522637 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.522684 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3" gracePeriod=600 Jan 27 19:27:27 crc kubenswrapper[4907]: I0127 19:27:27.161251 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3" exitCode=0 Jan 27 19:27:27 crc kubenswrapper[4907]: I0127 19:27:27.161349 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3"} Jan 27 19:27:27 crc kubenswrapper[4907]: I0127 19:27:27.161653 4907 scope.go:117] "RemoveContainer" 
containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:27:28 crc kubenswrapper[4907]: I0127 19:27:28.173009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"} Jan 27 19:29:56 crc kubenswrapper[4907]: I0127 19:29:56.525041 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:29:56 crc kubenswrapper[4907]: I0127 19:29:56.527052 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.733540 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.733596 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc 
kubenswrapper[4907]: I0127 19:29:57.733966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.734044 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.914799 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.914901 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955721 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955788 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955857 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955914 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955990 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.956011 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: 
I0127 19:29:57.956164 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.956200 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.965472 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.965550 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.206885 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.479077 4907 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" podUID="a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678200 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678744 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678237 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678910 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)"
Jan 27 19:30:02 crc kubenswrapper[4907]: I0127 19:30:02.753330 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:02 crc kubenswrapper[4907]: I0127 19:30:02.753333 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:02 crc kubenswrapper[4907]: I0127 19:30:02.987253 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.236849 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"]
Jan 27 19:30:03 crc kubenswrapper[4907]: E0127 19:30:03.241471 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-utilities"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.241504 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-utilities"
Jan 27 19:30:03 crc kubenswrapper[4907]: E0127 19:30:03.241989 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-content"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.242000 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-content"
Jan 27 19:30:03 crc kubenswrapper[4907]: E0127 19:30:03.242029 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.242035 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.242327 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.277664 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.301588 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.301587 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.344038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.345460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.345582 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.448209 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.448282 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.448312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.463315 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.527716 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.540100 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.645392 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.750600 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.751611 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:04 crc kubenswrapper[4907]: I0127 19:30:04.159768 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:04 crc kubenswrapper[4907]: I0127 19:30:04.159805 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:04 crc kubenswrapper[4907]: I0127 19:30:04.324220 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.13:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.443179 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.443179 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.445296 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.445219 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.938424 4907 patch_prober.go:28] interesting pod/monitoring-plugin-6596df577b-flw67 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.938503 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podUID="c3e1c70a-dd32-4bc6-b7ec-6ec039441440" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:06 crc kubenswrapper[4907]: I0127 19:30:06.399778 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"]
Jan 27 19:30:06 crc kubenswrapper[4907]: I0127 19:30:06.830192 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:06 crc kubenswrapper[4907]: I0127 19:30:06.830256 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.465125 4907 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-qb9qr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.465447 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674826 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674898 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674903 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674958 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674985 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.675015 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674929 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.675079 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.841887 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.006827 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.006875 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.007164 4907 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-5q5h2 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.007183 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" podUID="c53f2859-15de-4c57-81ba-539c7787b649" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.006786 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.176873 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.176967 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.176801 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.258800 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.258933 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.314327 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.314404 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341744 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341799 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341864 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341885 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.424628 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podUID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.424931 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.425114 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.425154 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.425172 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.597009 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.673720 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.673720 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.674076 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.674235 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.674277 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.753022 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.753029 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755719 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755730 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755758 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755815 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755865 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.756113 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podUID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.756808 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" podUID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.839741 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" podUID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.006813 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.006817 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171767 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171872 4907 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171896 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171939 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171956 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171994 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172011 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172334 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172368 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172378 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172435 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172439 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172753 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172952 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173005 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173032 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173132 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173200 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173246 4907 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173275 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:09 crc kubenswrapper[4907]: I0127
19:30:09.172744 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.394648 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.394990 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.394742 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.395063 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.49:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460093 
4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460158 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460207 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460239 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.748229 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" podUID="ee97e15a-ebc3-4c61-9841-9c1fb43fdee7" containerName="ovnkube-controller" probeResult="failure" output="command timed out" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.020831 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: 
Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.021239 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.112309 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.112786 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.112385 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc 
kubenswrapper[4907]: I0127 19:30:10.112956 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.503143 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podUID="7707f450-bf8d-4e84-9baa-a02bc80a0b22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.544393 4907 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7b8dfd4994-zw4xr container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.544457 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podUID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.663531 4907 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.663623 4907 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.694655 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.694655 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.454879 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.455197 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754194 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754367 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754444 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754498 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.793774 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" podUID="9a776a10-0883-468e-a8d3-087ca6429b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 
19:30:12.224817 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.224879 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.748340 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.752070 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.755161 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.945654 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc 
kubenswrapper[4907]: I0127 19:30:13.081621 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.081659 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.081745 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.081905 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.110798 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.110909 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.110921 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.193883 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.193964 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.194589 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313388 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313467 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313575 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313653 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.353644 4907 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zhq64 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.353715 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" podUID="bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.693765 4907 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-4ngf2 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.693812 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" podUID="70874c1f-da0d-4389-8021-fd3003150fff" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.748679 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.748679 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 
19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.749347 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wz5df" podUID="0b5adf10-ea9c-48b5-bece-3ee8683423e3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.750519 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.751853 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.756802 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.825030 4907 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-r2fdr container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.825133 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" podUID="8f62d8a1-62d1-4206-b061-f75c44ff2450" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.879385 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.907676 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.116708 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.321782 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.13:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.394933 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.394999 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.460302 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.460373 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.606218 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.606986 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.607054 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.646764 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.706073 4907 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.706131 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="2448dad5-d0f7-4335-a3fb-a23c5ef59bbf" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.808280 4907 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.808344 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="a9dc6389-0ad3-4259-aaf2-945493e66aa2" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385776 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385859 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385787 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385930 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385795 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385970 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.766255 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" podUID="5f465d65-342c-410f-9374-d8c5ac6f03e0" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.938885 4907 patch_prober.go:28] interesting pod/monitoring-plugin-6596df577b-flw67 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.938960 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podUID="c3e1c70a-dd32-4bc6-b7ec-6ec039441440" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.139676 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.139743 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.139673 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.139842 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.793040 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.793716 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887705 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podUID="f22de95d-f437-432c-917a-a08c082e02c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887720 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887797 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887802 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podUID="f22de95d-f437-432c-917a-a08c082e02c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.464782 4907 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-qb9qr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.465199 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.716760 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.716837 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.716892 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.716961 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717081 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717096 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717131 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717173 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.720535 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.720591 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.724286 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9"} pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.751837 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.757768 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" podUID="e6378a4c-96e5-4151-a0ca-c320fa9b667d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963826 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963894 4907 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-5q5h2 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963868 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963939 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963979 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" podUID="c53f2859-15de-4c57-81ba-539c7787b649" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963904 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.964058 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.004771 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.087922 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.088014 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.088098 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.130772 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.130836 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.130892 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.171843 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podUID="a05cfe48-4bf5-4199-aefa-de59259798c4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.212838 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.212903 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.212964 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.213286 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.213214 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.213427 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.253890 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.253947 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.254021 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.254133 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-65v8r"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.335861 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podUID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.376923 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377004 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podUID="e257f81e-9460-4391-a7a5-cca3fc9230d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377329 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377341 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377406 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377352 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377501 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377493 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377539 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" podUID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377637 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377638 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377699 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.428223 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679"} pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" containerMessage="Container operator failed liveness probe, will be restarted"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.428594 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" containerID="cri-o://32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679" gracePeriod=30
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.509845 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" podUID="a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.673759 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.673825 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.673939 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podUID="24caa967-ac26-4666-bf41-e2c4bc6ebb0f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.678855 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.678923 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.678946 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.679023 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.678977 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.679137 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.691464 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" containerMessage="Container packageserver failed liveness probe, will be restarted"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.691528 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" containerID="cri-o://39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8" gracePeriod=30
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.748870 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wz5df" podUID="0b5adf10-ea9c-48b5-bece-3ee8683423e3" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.799904 4907 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.799957 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.799975 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800043 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800047 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800071 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout
exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800032 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800187 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800237 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800257 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800296 4907 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800326 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.130751 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.130807 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297761 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297823 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get 
\"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297869 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297884 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.394526 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.394656 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.418677 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.418732 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.418783 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.469173 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.469230 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.469617 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.489063 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.681187 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.681260 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.750084 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.754035 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.782115 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 19:30:20 crc 
kubenswrapper[4907]: I0127 19:30:20.017119 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.017206 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112306 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112343 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112365 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112400 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.172037 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.172108 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.543958 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podUID="7707f450-bf8d-4e84-9baa-a02bc80a0b22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.626734 4907 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7b8dfd4994-zw4xr container/manager 
namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.626748 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podUID="7707f450-bf8d-4e84-9baa-a02bc80a0b22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.627066 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podUID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.626786 4907 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7b8dfd4994-zw4xr container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.30:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.627130 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podUID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.30:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.662878 4907 patch_prober.go:28] interesting 
pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.662952 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.663604 4907 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.663682 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.737120 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" 
output="Get \"https://10.217.0.169:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.737350 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.169:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.779608 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.779664 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.456477 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.456654 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.670596 4907 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8ljpb container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.670683 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podUID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753479 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753515 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753479 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 
19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753603 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.790756 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" podUID="9a776a10-0883-468e-a8d3-087ca6429b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.133726 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.134015 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.186747 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.227845 4907 
prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.270275 4907 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-xld9m container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.270760 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" podUID="9f254819-bf2c-4c38-881f-8d12a0d56278" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.471902 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.471974 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.750168 4907 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.751239 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.753406 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.753445 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.772752 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.984801 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.148750 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.148750 
4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.231860 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.231942 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.232046 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-csdnr" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.232328 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.232647 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get 
\"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.233737 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872"} pod="metallb-system/frr-k8s-csdnr" containerMessage="Container frr failed liveness probe, will be restarted"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.233838 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" containerID="cri-o://1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872" gracePeriod=2
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.317333 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.317390 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.352456 4907 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zhq64 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.352509 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" podUID="bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.693334 4907 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-4ngf2 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.693427 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" podUID="70874c1f-da0d-4389-8021-fd3003150fff" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.714678 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.747954 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.748008 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.748263 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wz5df" podUID="0b5adf10-ea9c-48b5-bece-3ee8683423e3" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.748970 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.750838 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.752472 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.752520 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.754525 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.755831 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.756509 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794405 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794463 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wz5df"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794474 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794481 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.796212 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.797436 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" containerID="cri-o://b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5" gracePeriod=30
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.798191 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.820852 4907 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-r2fdr container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.820913 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" podUID="8f62d8a1-62d1-4206-b061-f75c44ff2450" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.878971 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.880040 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.158734 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.159052 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.158746 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.310613 4907 trace.go:236] Trace[2137106824]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (27-Jan-2026 19:30:18.339) (total time: 5963ms):
Jan 27 19:30:24 crc kubenswrapper[4907]: Trace[2137106824]: [5.963312054s] [5.963312054s] END
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.310618 4907 trace.go:236] Trace[379575846]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (27-Jan-2026 19:30:14.794) (total time: 9508ms):
Jan 27 19:30:24 crc kubenswrapper[4907]: Trace[379575846]: [9.508132295s] [9.508132295s] END
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.343776 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.13:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395247 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395307 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395675 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395589 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459593 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459638 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459718 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459650 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.556860 4907 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fqkck container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.557258 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" podUID="21da9305-e6ab-4378-b316-7a3ffc47faa0" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.557094 4907 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fqkck container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.557344 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" podUID="21da9305-e6ab-4378-b316-7a3ffc47faa0" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.623905 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872"}
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.623797 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872" exitCode=143
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624717 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624846 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624878 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624951 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.627391 4907 generic.go:334] "Generic (PLEG): container finished" podID="a733096f-e99d-4186-8542-1d8cb16012d2" containerID="d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd" exitCode=1
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.627428 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerDied","Data":"d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd"}
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.634549 4907 scope.go:117] "RemoveContainer" containerID="d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.704870 4907 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.704930 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="2448dad5-d0f7-4335-a3fb-a23c5ef59bbf" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.748914 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.752995 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.753121 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.808038 4907 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.808158 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="a9dc6389-0ad3-4259-aaf2-945493e66aa2" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.201838 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.344800 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.345159 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.385902 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.385966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386521 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386611 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386689 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386778 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386987 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.393866 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7"} pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" containerMessage="Container metrics-server failed liveness probe, will be restarted"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.393965 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" containerID="cri-o://c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7" gracePeriod=170
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.538752 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wz5df"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.639709 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerID="67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6" exitCode=1
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.639776 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerDied","Data":"67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6"}
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.641326 4907 generic.go:334] "Generic (PLEG): container finished" podID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerID="9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175" exitCode=1
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.641354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerDied","Data":"9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175"}
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.643435 4907 generic.go:334] "Generic (PLEG): container finished" podID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerID="623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0" exitCode=1
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.643486 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerDied","Data":"623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0"}
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.645202 4907 scope.go:117] "RemoveContainer" containerID="9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.645683 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerID="39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8" exitCode=0
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.645711 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerDied","Data":"39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8"}
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.646809 4907 scope.go:117] "RemoveContainer" containerID="67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.658202 4907 scope.go:117] "RemoveContainer" containerID="623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.738099 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.169:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.738395 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.169:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.830177 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.830234 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.830295 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-7b674f54c6-zhrj9"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.846395 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615"} pod="openshift-console/console-7b674f54c6-zhrj9" containerMessage="Container console failed liveness probe, will be restarted"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.936704 4907 patch_prober.go:28] interesting pod/monitoring-plugin-6596df577b-flw67 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.937086 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podUID="c3e1c70a-dd32-4bc6-b7ec-6ec039441440" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.937275 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140037 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140117 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140133 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140192 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.164016 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.521538 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.521847 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.661243 4907 generic.go:334] "Generic (PLEG): container finished" podID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerID="d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5" exitCode=1
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.661343 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerDied","Data":"d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5"}
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.662242 4907 scope.go:117] "RemoveContainer" containerID="d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.752277 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.762959 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.774468 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": read tcp 10.217.0.2:46454->10.217.0.102:8081: read: connection reset by peer"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.774511 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": read tcp 10.217.0.2:46452->10.217.0.102:8081: read: connection reset by peer"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.774598 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.775446 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": dial tcp 10.217.0.102:8081: connect: connection refused"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.791354 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" start-of-body=
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.791401 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused"
Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.841932 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podUID="f22de95d-f437-432c-917a-a08c082e02c4" containerName="operator"
probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.842011 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.842030 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.842084 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.993326 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.993650 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.205716 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": dial tcp 10.217.0.112:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 
19:30:27.206039 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": dial tcp 10.217.0.112:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.212066 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.238482 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.238619 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.238480 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.239028 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.320502 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.320549 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.402078 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.465459 4907 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-qb9qr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.465852 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.465905 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.466906 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57"} pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" 
containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.466934 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerName="authentication-operator" containerID="cri-o://cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57" gracePeriod=30 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.527727 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.527769 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.673728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerStarted","Data":"2eeea9891ba496331cd3fd22e2dd09e9b59b08d5c9850923975af6681162f64d"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.675373 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.682838 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"7e72c78899397eb28b7f44e7716e8cfc6c0725ea73c548b23b002d5b14eecb74"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.683110 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 19:30:27 
crc kubenswrapper[4907]: I0127 19:30:27.682928 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.683608 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.684001 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.684052 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.686470 4907 generic.go:334] "Generic (PLEG): container finished" podID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerID="cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.686541 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" 
event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerDied","Data":"cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.687821 4907 scope.go:117] "RemoveContainer" containerID="cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.690807 4907 generic.go:334] "Generic (PLEG): container finished" podID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerID="0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.690890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerDied","Data":"0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.691671 4907 scope.go:117] "RemoveContainer" containerID="0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.712397 4907 generic.go:334] "Generic (PLEG): container finished" podID="812bcca3-8896-4492-86ff-1df596f0e604" containerID="32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679" exitCode=0 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.712482 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerDied","Data":"32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.716526 4907 generic.go:334] "Generic (PLEG): container finished" podID="9a776a10-0883-468e-a8d3-087ca6429b1b" containerID="e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 
19:30:27.716648 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerDied","Data":"e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.718586 4907 scope.go:117] "RemoveContainer" containerID="e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.718975 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerStarted","Data":"66c71f667fae0c6a02c528ba290895d59ae9ee3b3ece0ee42b91b220c283c810"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.719352 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.723209 4907 generic.go:334] "Generic (PLEG): container finished" podID="018e0dfe-5282-40d5-87db-8551645d6e02" containerID="b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.723324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerDied","Data":"b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.724868 4907 scope.go:117] "RemoveContainer" containerID="b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.729218 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerStarted","Data":"34abe2ba07118423357146528ed4139f9cc106258253d30ccd6322e1de78d314"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.729976 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.730006 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.730066 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.736885 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerStarted","Data":"5f2a4d065111347b84e91863ec562e48004c46fef945ae10a6499abec2ff956f"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.737277 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.749727 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" 
output="command timed out" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.749747 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.815760 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.816107 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.024813 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.024844 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.024927 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 19:30:28 crc 
kubenswrapper[4907]: I0127 19:30:28.025042 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025065 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025091 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025104 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025161 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc 
kubenswrapper[4907]: I0127 19:30:28.361118 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.587464 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podUID="24caa967-ac26-4666-bf41-e2c4bc6ebb0f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.587746 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podUID="24caa967-ac26-4666-bf41-e2c4bc6ebb0f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.660373 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.751294 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerStarted","Data":"80e4bcdfd7d70a4a810ced5f2b5cfce7d51602948b748f79cce44bc3fb1f2d60"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.751506 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.753381 4907 generic.go:334] "Generic (PLEG): container finished" podID="f1ed42c6-98ac-41b8-96df-24919c0f9837" 
containerID="3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408" exitCode=1 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.753446 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerDied","Data":"3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.755699 4907 generic.go:334] "Generic (PLEG): container finished" podID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerID="fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404" exitCode=1 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.755741 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerDied","Data":"fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.757013 4907 scope.go:117] "RemoveContainer" containerID="fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.766176 4907 generic.go:334] "Generic (PLEG): container finished" podID="a05cfe48-4bf5-4199-aefa-de59259798c4" containerID="5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e" exitCode=1 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.766251 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerDied","Data":"5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.768514 4907 scope.go:117] "RemoveContainer" containerID="3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408" Jan 
27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.779299 4907 scope.go:117] "RemoveContainer" containerID="5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.802331 4907 generic.go:334] "Generic (PLEG): container finished" podID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerID="cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57" exitCode=0 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.802468 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerDied","Data":"cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.805900 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.805938 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.831745 4907 trace.go:236] Trace[1246640287]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (27-Jan-2026 19:30:27.773) (total time: 1058ms): Jan 27 19:30:28 crc kubenswrapper[4907]: Trace[1246640287]: [1.058200993s] [1.058200993s] END Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.860798 4907 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.012922 4907 trace.go:236] Trace[1645195763]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (27-Jan-2026 19:30:27.689) (total time: 1323ms): Jan 27 19:30:29 crc kubenswrapper[4907]: Trace[1645195763]: [1.323317005s] [1.323317005s] END Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.394654 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.394946 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.465829 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.465890 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" 
probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.479676 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.862696 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerStarted","Data":"f9381f1e02136e207f2ee8f3be5aebc0285af746b7e7d1deece6f0da3a8538ed"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.863280 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.913030 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerStarted","Data":"b0ca80dfbf17362ccb5cc75ed398cde1df7189cb54e38ad9b78cd000c58a42bd"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.914378 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.931498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerStarted","Data":"b76d2fa4132b926c053991ec9229a853e2f66ad2189e4f897765e85dbec0b63d"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.932358 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.968725 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerStarted","Data":"1a0c53bd8db41eb6a071ec999505e36a82474a27d4fa122750df878996505807"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.974348 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.000118 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerStarted","Data":"1e82eea0a3b9f0d7ce12ae9f179109387de1e78ee9f4db9a68e8e73c7bff2227"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.001194 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.035858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"f632838810d641669fb0b49dfc60ada952cc16b653c17272fb75b415fce7ce8c"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.035889 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.040840 4907 generic.go:334] "Generic (PLEG): container finished" podID="f22de95d-f437-432c-917a-a08c082e02c4" 
containerID="629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06" exitCode=1 Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.041762 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerDied","Data":"629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.051362 4907 scope.go:117] "RemoveContainer" containerID="629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.054873 4907 generic.go:334] "Generic (PLEG): container finished" podID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerID="eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e" exitCode=1 Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.054940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerDied","Data":"eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.061236 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerStarted","Data":"448f7f7cbdec6aecab43fdbad5699810dd48adba4c8b205cbd11a867abb8d56e"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.062400 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.069277 4907 scope.go:117] "RemoveContainer" containerID="eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 
19:30:31.097069 4907 generic.go:334] "Generic (PLEG): container finished" podID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerID="b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5" exitCode=0 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.097118 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerDied","Data":"b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.102261 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerStarted","Data":"9c7707486c5f1175326256d08c0328b8d1cfc427d081243107265e7bf96f7ccc"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.102939 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" start-of-body= Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.102987 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.103011 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.107686 4907 generic.go:334] "Generic (PLEG): container finished" podID="a4aa00b3-8a54-4f84-907d-34a73b93944f" 
containerID="4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd" exitCode=1 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.107784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerDied","Data":"4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.109297 4907 scope.go:117] "RemoveContainer" containerID="4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.122909 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerStarted","Data":"ed018f6702dc598ff92e7eed4585570a267b8a651ab1ad783d28f16839530a48"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.123238 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.128895 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerStarted","Data":"a59cd9310b04d2c3030d9c9137668d4d482a1a3e8db5e305ae5b66810894471c"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.136690 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerStarted","Data":"e020b203bae7108f7330d39e69972e7d8154282b778b55c83cde88eb9abd4348"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.137673 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.146518 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerStarted","Data":"6af4de647cca09b70f016ca9c69666adf0119f1f8ae8673efb6e177ed67e3974"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.147151 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.527771 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" containerID="cri-o://641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" gracePeriod=23 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.759289 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" containerID="cri-o://1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d" gracePeriod=22 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.947633 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-csdnr" Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.160854 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerStarted","Data":"12893a0063c938ff84fabbbadb9431ecd0cb42b57fe429141f92bcf7596b7b46"} Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.161856 4907 patch_prober.go:28] interesting 
pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" start-of-body= Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.161938 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.671647 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.683113 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.691912 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.691971 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.753736 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.779719 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-csdnr" Jan 27 19:30:33 crc kubenswrapper[4907]: I0127 19:30:33.075670 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": dial tcp 10.217.0.115:8081: connect: connection refused" Jan 27 19:30:33 crc kubenswrapper[4907]: I0127 19:30:33.183768 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"84182b961cc5aee06dab4663b064c9014f34de7dacf97a434dd8c57dc54ad909"} Jan 27 19:30:34 crc kubenswrapper[4907]: I0127 19:30:34.677901 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.006456 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"] Jan 27 19:30:35 crc kubenswrapper[4907]: W0127 19:30:35.196815 4907 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a233285_9953_450d_a8a9_b7dc65737a09.slice/crio-3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441 WatchSource:0}: Error finding container 3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441: Status 404 returned error can't find the container with id 3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441 Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.211437 4907 generic.go:334] "Generic (PLEG): container finished" podID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" exitCode=0 Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.211485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerDied","Data":"641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941"} Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.805015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.876262 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.226067 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerStarted","Data":"6c218baa2a032bad9616784e44f52bf56ac92d62e4deb36a19b1d6cf6a7ce035"} Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.226358 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" 
event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerStarted","Data":"3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441"} Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.231747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"660c28f22184627338c57d8f4762e86d2f2775b412814b98dc5dce3a067ac3b8"} Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.290140 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" podStartSLOduration=36.288805291 podStartE2EDuration="36.288805291s" podCreationTimestamp="2026-01-27 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:30:36.277014067 +0000 UTC m=+5091.406296679" watchObservedRunningTime="2026-01-27 19:30:36.288805291 +0000 UTC m=+5091.418087903" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.787089 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.793108 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.803127 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.807863 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.808016 4907 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.893575 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.043767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.171637 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.207876 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.226861 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.247713 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.354238 4907 generic.go:334] "Generic (PLEG): container finished" podID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerID="1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d" exitCode=0 Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.354535 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerDied","Data":"1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d"} Jan 27 19:30:37 crc 
kubenswrapper[4907]: I0127 19:30:37.397696 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.492666 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.562233 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.718864 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 19:30:38 crc kubenswrapper[4907]: I0127 19:30:38.373820 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"512d82b9b7ca402fb2e90662c6f6e6f6fd33ab87f4263b765821c926f7c25ec4"} Jan 27 19:30:38 crc kubenswrapper[4907]: I0127 19:30:38.377992 4907 generic.go:334] "Generic (PLEG): container finished" podID="4a233285-9953-450d-a8a9-b7dc65737a09" containerID="6c218baa2a032bad9616784e44f52bf56ac92d62e4deb36a19b1d6cf6a7ce035" exitCode=0 Jan 27 19:30:38 crc kubenswrapper[4907]: I0127 19:30:38.378043 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerDied","Data":"6c218baa2a032bad9616784e44f52bf56ac92d62e4deb36a19b1d6cf6a7ce035"} Jan 27 19:30:39 crc kubenswrapper[4907]: I0127 19:30:39.511385 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.595102 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.680966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.681093 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.687619 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.688025 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" containerID="cri-o://6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada" gracePeriod=30 Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.714606 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"4a233285-9953-450d-a8a9-b7dc65737a09\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.715059 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"4a233285-9953-450d-a8a9-b7dc65737a09\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.715132 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"4a233285-9953-450d-a8a9-b7dc65737a09\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.717949 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a233285-9953-450d-a8a9-b7dc65737a09" (UID: "4a233285-9953-450d-a8a9-b7dc65737a09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.773224 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a233285-9953-450d-a8a9-b7dc65737a09" (UID: "4a233285-9953-450d-a8a9-b7dc65737a09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.776840 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58" (OuterVolumeSpecName: "kube-api-access-6ph58") pod "4a233285-9953-450d-a8a9-b7dc65737a09" (UID: "4a233285-9953-450d-a8a9-b7dc65737a09"). InnerVolumeSpecName "kube-api-access-6ph58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.827162 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.827213 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.827230 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.177784 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.178303 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.413973 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerDied","Data":"3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441"} Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.414477 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.419189 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.739785 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.763656 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 19:30:42 crc kubenswrapper[4907]: I0127 19:30:42.654201 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 19:30:42 crc kubenswrapper[4907]: I0127 19:30:42.654647 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.086752 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.096487 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" containerID="cri-o://584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9" gracePeriod=15 Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.442049 4907 generic.go:334] "Generic (PLEG): container finished" podID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerID="584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9" exitCode=0 Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.442149 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerDied","Data":"584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9"} Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.444051 4907 generic.go:334] "Generic (PLEG): container finished" podID="621bccf6-c3e9-4b2d-821b-217848191c27" containerID="6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada" exitCode=0 Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.444096 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerDied","Data":"6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada"} Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.804714 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" path="/var/lib/kubelet/pods/a8fea3de-b1db-4c31-8636-329b2d296f02/volumes" Jan 27 19:30:45 crc kubenswrapper[4907]: I0127 19:30:45.512788 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerStarted","Data":"d8633d387b7e48ef0d9854b44d24d05af7e0a2ff93afe9d37d5aceefcb36ff39"} Jan 27 19:30:45 crc kubenswrapper[4907]: I0127 19:30:45.517501 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:45 crc kubenswrapper[4907]: I0127 19:30:45.818163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:46 crc kubenswrapper[4907]: I0127 19:30:46.527455 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"460ae35a74b5a71c62f19fa9ad954c218661ff94bf5e67271f5521cdf31822d7"} Jan 27 19:30:47 crc kubenswrapper[4907]: I0127 19:30:47.276299 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.411419 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" containerID="cri-o://20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615" gracePeriod=15 Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.616859 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b674f54c6-zhrj9_a2362241-225f-40e2-9be3-67766a65316b/console/0.log" Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.617213 4907 generic.go:334] "Generic (PLEG): container finished" podID="a2362241-225f-40e2-9be3-67766a65316b" containerID="20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615" exitCode=2 Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.617277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerDied","Data":"20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615"} Jan 27 19:30:52 crc kubenswrapper[4907]: I0127 19:30:52.294824 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:52 crc kubenswrapper[4907]: I0127 19:30:52.648670 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7b674f54c6-zhrj9_a2362241-225f-40e2-9be3-67766a65316b/console/0.log" Jan 27 19:30:52 crc kubenswrapper[4907]: I0127 19:30:52.648732 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerStarted","Data":"25860d96ab09486b84ce9683bdf1c2b971b91a2b1fa03a3f523c812227772bf9"} Jan 27 19:30:55 crc kubenswrapper[4907]: I0127 19:30:55.830361 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:55 crc kubenswrapper[4907]: I0127 19:30:55.831166 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:55 crc kubenswrapper[4907]: I0127 19:30:55.835209 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.522056 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.529103 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.529178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 
19:30:56.530292 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.530362 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" gracePeriod=600 Jan 27 19:30:56 crc kubenswrapper[4907]: E0127 19:30:56.711686 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.730071 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" exitCode=0 Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.731091 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"} Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.735754 4907 scope.go:117] "RemoveContainer" 
containerID="3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.736372 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:30:56 crc kubenswrapper[4907]: E0127 19:30:56.737465 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.738283 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:57 crc kubenswrapper[4907]: I0127 19:30:57.301034 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:00 crc kubenswrapper[4907]: I0127 19:31:00.754910 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 19:31:02 crc kubenswrapper[4907]: I0127 19:31:02.229378 4907 scope.go:117] "RemoveContainer" containerID="12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc" Jan 27 19:31:02 crc kubenswrapper[4907]: I0127 19:31:02.299039 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:07 crc kubenswrapper[4907]: I0127 19:31:07.309726 4907 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:10 crc kubenswrapper[4907]: I0127 19:31:10.748542 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:10 crc kubenswrapper[4907]: E0127 19:31:10.750814 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:12 crc kubenswrapper[4907]: I0127 19:31:12.297923 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:17 crc kubenswrapper[4907]: I0127 19:31:17.302651 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:19 crc kubenswrapper[4907]: I0127 19:31:19.126184 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 19:31:19 crc kubenswrapper[4907]: I0127 19:31:19.519380 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 19:31:19 crc kubenswrapper[4907]: I0127 19:31:19.904990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/openstack-galera-0" Jan 27 19:31:20 crc kubenswrapper[4907]: I0127 19:31:20.003593 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 19:31:22 crc kubenswrapper[4907]: I0127 19:31:22.315688 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.060590 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:23 crc kubenswrapper[4907]: E0127 19:31:23.067485 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a233285-9953-450d-a8a9-b7dc65737a09" containerName="collect-profiles" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.067523 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a233285-9953-450d-a8a9-b7dc65737a09" containerName="collect-profiles" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.069122 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a233285-9953-450d-a8a9-b7dc65737a09" containerName="collect-profiles" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.086143 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.174057 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.203317 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.203667 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.203720 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.306288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.306394 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.306414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.311644 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.315161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.345475 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.442469 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:25 crc kubenswrapper[4907]: I0127 19:31:25.497160 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:25 crc kubenswrapper[4907]: I0127 19:31:25.800393 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:25 crc kubenswrapper[4907]: E0127 19:31:25.801411 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:26 crc kubenswrapper[4907]: I0127 19:31:26.448134 4907 generic.go:334] "Generic (PLEG): container finished" podID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerID="23a5dfb16c39f0d53573f1a759dd39f9246437e9f6be389b68a62fa182f3da78" exitCode=0 Jan 27 19:31:26 crc kubenswrapper[4907]: I0127 19:31:26.448210 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"23a5dfb16c39f0d53573f1a759dd39f9246437e9f6be389b68a62fa182f3da78"} Jan 27 19:31:26 crc kubenswrapper[4907]: I0127 19:31:26.448256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerStarted","Data":"ab68471ed2684e634387c8ca1ec2775e5543154e0c7cf797190737989c85b0b8"} Jan 27 19:31:28 crc kubenswrapper[4907]: I0127 19:31:28.472241 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" 
event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerStarted","Data":"427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1"} Jan 27 19:31:30 crc kubenswrapper[4907]: I0127 19:31:30.521945 4907 generic.go:334] "Generic (PLEG): container finished" podID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerID="427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1" exitCode=0 Jan 27 19:31:30 crc kubenswrapper[4907]: I0127 19:31:30.522009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1"} Jan 27 19:31:31 crc kubenswrapper[4907]: I0127 19:31:31.543475 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerStarted","Data":"7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7"} Jan 27 19:31:31 crc kubenswrapper[4907]: I0127 19:31:31.568354 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5v8dc" podStartSLOduration=4.940794692 podStartE2EDuration="9.566417816s" podCreationTimestamp="2026-01-27 19:31:22 +0000 UTC" firstStartedPulling="2026-01-27 19:31:26.451480014 +0000 UTC m=+5141.580762626" lastFinishedPulling="2026-01-27 19:31:31.077103138 +0000 UTC m=+5146.206385750" observedRunningTime="2026-01-27 19:31:31.563331518 +0000 UTC m=+5146.692614130" watchObservedRunningTime="2026-01-27 19:31:31.566417816 +0000 UTC m=+5146.695700418" Jan 27 19:31:33 crc kubenswrapper[4907]: I0127 19:31:33.442990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:33 crc kubenswrapper[4907]: I0127 19:31:33.444981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:34 crc kubenswrapper[4907]: I0127 19:31:34.501021 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5v8dc" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" probeResult="failure" output=< Jan 27 19:31:34 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:31:34 crc kubenswrapper[4907]: > Jan 27 19:31:39 crc kubenswrapper[4907]: I0127 19:31:39.748215 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:39 crc kubenswrapper[4907]: E0127 19:31:39.749134 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:44 crc kubenswrapper[4907]: I0127 19:31:44.512352 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5v8dc" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" probeResult="failure" output=< Jan 27 19:31:44 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:31:44 crc kubenswrapper[4907]: > Jan 27 19:31:53 crc kubenswrapper[4907]: I0127 19:31:53.512072 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:53 crc kubenswrapper[4907]: I0127 19:31:53.598017 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:53 crc 
kubenswrapper[4907]: I0127 19:31:53.748996 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:53 crc kubenswrapper[4907]: E0127 19:31:53.749401 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:54 crc kubenswrapper[4907]: I0127 19:31:54.236553 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:54 crc kubenswrapper[4907]: I0127 19:31:54.853851 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5v8dc" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" containerID="cri-o://7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7" gracePeriod=2 Jan 27 19:31:55 crc kubenswrapper[4907]: I0127 19:31:55.870037 4907 generic.go:334] "Generic (PLEG): container finished" podID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerID="7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7" exitCode=0 Jan 27 19:31:55 crc kubenswrapper[4907]: I0127 19:31:55.870434 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7"} Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.081668 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.208825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"9ff3f9b6-886f-4900-ba62-3d79659faabc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.208958 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"9ff3f9b6-886f-4900-ba62-3d79659faabc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.209067 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"9ff3f9b6-886f-4900-ba62-3d79659faabc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.213632 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities" (OuterVolumeSpecName: "utilities") pod "9ff3f9b6-886f-4900-ba62-3d79659faabc" (UID: "9ff3f9b6-886f-4900-ba62-3d79659faabc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.231017 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5" (OuterVolumeSpecName: "kube-api-access-ghgf5") pod "9ff3f9b6-886f-4900-ba62-3d79659faabc" (UID: "9ff3f9b6-886f-4900-ba62-3d79659faabc"). InnerVolumeSpecName "kube-api-access-ghgf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.289053 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ff3f9b6-886f-4900-ba62-3d79659faabc" (UID: "9ff3f9b6-886f-4900-ba62-3d79659faabc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.312330 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.312365 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.312376 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") on node \"crc\" DevicePath \"\"" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.885205 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"ab68471ed2684e634387c8ca1ec2775e5543154e0c7cf797190737989c85b0b8"} Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.885518 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.885545 4907 scope.go:117] "RemoveContainer" containerID="7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7" Jan 27 19:31:57 crc kubenswrapper[4907]: E0127 19:31:57.031002 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff3f9b6_886f_4900_ba62_3d79659faabc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff3f9b6_886f_4900_ba62_3d79659faabc.slice/crio-ab68471ed2684e634387c8ca1ec2775e5543154e0c7cf797190737989c85b0b8\": RecentStats: unable to find data in memory cache]" Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.249915 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.253699 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.285889 4907 scope.go:117] "RemoveContainer" containerID="427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1" Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.330175 4907 scope.go:117] "RemoveContainer" containerID="23a5dfb16c39f0d53573f1a759dd39f9246437e9f6be389b68a62fa182f3da78" Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.762273 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" path="/var/lib/kubelet/pods/9ff3f9b6-886f-4900-ba62-3d79659faabc/volumes" Jan 27 19:32:02 crc kubenswrapper[4907]: I0127 19:32:02.731657 4907 scope.go:117] "RemoveContainer" containerID="13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4" Jan 27 19:32:02 crc 
kubenswrapper[4907]: I0127 19:32:02.790901 4907 scope.go:117] "RemoveContainer" containerID="724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827" Jan 27 19:32:08 crc kubenswrapper[4907]: I0127 19:32:08.749197 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:32:08 crc kubenswrapper[4907]: E0127 19:32:08.751113 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:32:21 crc kubenswrapper[4907]: I0127 19:32:21.748766 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:32:21 crc kubenswrapper[4907]: E0127 19:32:21.749959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:32:26 crc kubenswrapper[4907]: I0127 19:32:26.274697 4907 generic.go:334] "Generic (PLEG): container finished" podID="019838dd-5c5f-40f0-a169-09156549d64c" containerID="a46b349017119500621dd5d81eceaf280f07e4849a6fbfdb2535471de47390a8" exitCode=1 Jan 27 19:32:26 crc kubenswrapper[4907]: I0127 19:32:26.274809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerDied","Data":"a46b349017119500621dd5d81eceaf280f07e4849a6fbfdb2535471de47390a8"} Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.112062 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.243944 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244062 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244108 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244178 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244348 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244431 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244482 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244544 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244642 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.246850 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data" (OuterVolumeSpecName: "config-data") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.247971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.250750 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.254095 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t" (OuterVolumeSpecName: "kube-api-access-2cd2t") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "kube-api-access-2cd2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.254328 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.281998 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.288521 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.290701 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.304965 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerDied","Data":"e485939d601422021124194f41b2edb21d01ebcfbafc4ed78de76b707da03560"} Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.305011 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e485939d601422021124194f41b2edb21d01ebcfbafc4ed78de76b707da03560" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.305033 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.321844 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348482 4907 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348675 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348690 4907 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348703 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348715 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348724 
4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348734 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348742 4907 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.350156 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.382061 4907 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.452135 4907 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:35 crc kubenswrapper[4907]: I0127 19:32:35.757715 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:32:35 crc kubenswrapper[4907]: E0127 19:32:35.758662 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.577894 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583053 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019838dd-5c5f-40f0-a169-09156549d64c" containerName="tempest-tests-tempest-tests-runner" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583104 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019838dd-5c5f-40f0-a169-09156549d64c" containerName="tempest-tests-tempest-tests-runner" Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583146 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-content" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583160 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-content" Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583207 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-utilities" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583222 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-utilities" Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583268 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583281 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.584238 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="019838dd-5c5f-40f0-a169-09156549d64c" containerName="tempest-tests-tempest-tests-runner" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.584291 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.587795 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.590709 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.598307 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5d7cl" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.657959 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.658241 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chhg\" (UniqueName: \"kubernetes.io/projected/9364bcb6-d99e-42e9-9f1a-58054d2a59ab-kube-api-access-8chhg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc 
kubenswrapper[4907]: I0127 19:32:40.761192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chhg\" (UniqueName: \"kubernetes.io/projected/9364bcb6-d99e-42e9-9f1a-58054d2a59ab-kube-api-access-8chhg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.761478 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.763542 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.795470 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chhg\" (UniqueName: \"kubernetes.io/projected/9364bcb6-d99e-42e9-9f1a-58054d2a59ab-kube-api-access-8chhg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.837625 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.933864 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:32:41 crc kubenswrapper[4907]: I0127 19:32:41.446735 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 19:32:41 crc kubenswrapper[4907]: I0127 19:32:41.484627 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9364bcb6-d99e-42e9-9f1a-58054d2a59ab","Type":"ContainerStarted","Data":"7edd0bb512134bead825dc1eb3c9eb0bfd7d46c4cb8f42e46fe3224200323ddd"} Jan 27 19:32:43 crc kubenswrapper[4907]: I0127 19:32:43.514381 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9364bcb6-d99e-42e9-9f1a-58054d2a59ab","Type":"ContainerStarted","Data":"58fafef6bcfd0c50884791b2af0b4f4bd1299c718f4b7731231e7b8e721c19d6"} Jan 27 19:32:43 crc kubenswrapper[4907]: I0127 19:32:43.536799 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.115243402 podStartE2EDuration="3.536780733s" podCreationTimestamp="2026-01-27 19:32:40 +0000 UTC" firstStartedPulling="2026-01-27 19:32:41.460715137 +0000 UTC m=+5216.589997749" lastFinishedPulling="2026-01-27 19:32:42.882252468 +0000 UTC m=+5218.011535080" observedRunningTime="2026-01-27 19:32:43.530469415 +0000 UTC m=+5218.659752037" watchObservedRunningTime="2026-01-27 19:32:43.536780733 +0000 UTC m=+5218.666063355" Jan 27 19:32:47 crc kubenswrapper[4907]: I0127 19:32:47.751018 4907 scope.go:117] "RemoveContainer" 
containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:32:47 crc kubenswrapper[4907]: E0127 19:32:47.751955 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:32:55 crc kubenswrapper[4907]: I0127 19:32:55.668582 4907 generic.go:334] "Generic (PLEG): container finished" podID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerID="c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7" exitCode=0 Jan 27 19:32:55 crc kubenswrapper[4907]: I0127 19:32:55.669064 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerDied","Data":"c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7"} Jan 27 19:32:56 crc kubenswrapper[4907]: I0127 19:32:56.697388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerStarted","Data":"73e89c3734604a874e28a202e8b53b97965fe17e4a85b2841a460f720c7962ab"} Jan 27 19:32:58 crc kubenswrapper[4907]: I0127 19:32:58.748497 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:32:58 crc kubenswrapper[4907]: E0127 19:32:58.749703 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:33:02 crc kubenswrapper[4907]: I0127 19:33:02.965043 4907 scope.go:117] "RemoveContainer" containerID="0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa" Jan 27 19:33:04 crc kubenswrapper[4907]: I0127 19:33:04.357919 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 19:33:04 crc kubenswrapper[4907]: I0127 19:33:04.357978 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 19:33:09 crc kubenswrapper[4907]: I0127 19:33:09.750452 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:33:09 crc kubenswrapper[4907]: E0127 19:33:09.752331 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:33:22 crc kubenswrapper[4907]: I0127 19:33:22.748873 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:33:22 crc kubenswrapper[4907]: E0127 19:33:22.750185 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:33:24 crc kubenswrapper[4907]: I0127 19:33:24.363656 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 19:33:24 crc kubenswrapper[4907]: I0127 19:33:24.370314 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.355309 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9jhvg"] Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.358415 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.369502 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"] Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.473945 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.474078 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc 
kubenswrapper[4907]: I0127 19:33:29.474166 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.579151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.579274 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.579325 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.580689 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc 
kubenswrapper[4907]: I0127 19:33:29.581187 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.602090 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.688463 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:30 crc kubenswrapper[4907]: I0127 19:33:30.344913 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"] Jan 27 19:33:31 crc kubenswrapper[4907]: I0127 19:33:31.114974 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9fc7a46-9732-43c0-af69-22c598778530" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde" exitCode=0 Jan 27 19:33:31 crc kubenswrapper[4907]: I0127 19:33:31.115051 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"} Jan 27 19:33:31 crc kubenswrapper[4907]: I0127 19:33:31.115272 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" 
event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerStarted","Data":"6a2aecfd10f37bbdb917506ea88c69939cf002c52a320be18eb04643f8f8eece"} Jan 27 19:33:33 crc kubenswrapper[4907]: I0127 19:33:33.140488 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerStarted","Data":"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"} Jan 27 19:33:34 crc kubenswrapper[4907]: I0127 19:33:34.153391 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9fc7a46-9732-43c0-af69-22c598778530" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd" exitCode=0 Jan 27 19:33:34 crc kubenswrapper[4907]: I0127 19:33:34.153485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"} Jan 27 19:33:35 crc kubenswrapper[4907]: I0127 19:33:35.167058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerStarted","Data":"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"} Jan 27 19:33:35 crc kubenswrapper[4907]: I0127 19:33:35.191185 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9jhvg" podStartSLOduration=2.662723274 podStartE2EDuration="6.191166953s" podCreationTimestamp="2026-01-27 19:33:29 +0000 UTC" firstStartedPulling="2026-01-27 19:33:31.117921709 +0000 UTC m=+5266.247204341" lastFinishedPulling="2026-01-27 19:33:34.646365388 +0000 UTC m=+5269.775648020" observedRunningTime="2026-01-27 19:33:35.186206253 +0000 UTC m=+5270.315488875" watchObservedRunningTime="2026-01-27 19:33:35.191166953 +0000 UTC 
m=+5270.320449565" Jan 27 19:33:37 crc kubenswrapper[4907]: I0127 19:33:37.748137 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:33:37 crc kubenswrapper[4907]: E0127 19:33:37.748857 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:33:39 crc kubenswrapper[4907]: I0127 19:33:39.688878 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:39 crc kubenswrapper[4907]: I0127 19:33:39.689522 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:39 crc kubenswrapper[4907]: I0127 19:33:39.741408 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:40 crc kubenswrapper[4907]: I0127 19:33:40.501689 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9jhvg" Jan 27 19:33:40 crc kubenswrapper[4907]: I0127 19:33:40.560260 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"] Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.243523 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9jhvg" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server" containerID="cri-o://8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132" gracePeriod=2 Jan 
27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.849488 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.974353 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"c9fc7a46-9732-43c0-af69-22c598778530\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") "
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.974660 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"c9fc7a46-9732-43c0-af69-22c598778530\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") "
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.974723 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"c9fc7a46-9732-43c0-af69-22c598778530\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") "
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.975075 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities" (OuterVolumeSpecName: "utilities") pod "c9fc7a46-9732-43c0-af69-22c598778530" (UID: "c9fc7a46-9732-43c0-af69-22c598778530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.975519 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.979840 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8" (OuterVolumeSpecName: "kube-api-access-vknz8") pod "c9fc7a46-9732-43c0-af69-22c598778530" (UID: "c9fc7a46-9732-43c0-af69-22c598778530"). InnerVolumeSpecName "kube-api-access-vknz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.022274 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9fc7a46-9732-43c0-af69-22c598778530" (UID: "c9fc7a46-9732-43c0-af69-22c598778530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.077174 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.077206 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") on node \"crc\" DevicePath \"\""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255573 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9fc7a46-9732-43c0-af69-22c598778530" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132" exitCode=0
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"}
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255634 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255657 4907 scope.go:117] "RemoveContainer" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"6a2aecfd10f37bbdb917506ea88c69939cf002c52a320be18eb04643f8f8eece"}
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.278163 4907 scope.go:117] "RemoveContainer" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.299697 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.306125 4907 scope.go:117] "RemoveContainer" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.317158 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.355565 4907 scope.go:117] "RemoveContainer" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"
Jan 27 19:33:43 crc kubenswrapper[4907]: E0127 19:33:43.358995 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132\": container with ID starting with 8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132 not found: ID does not exist" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.359058 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"} err="failed to get container status \"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132\": rpc error: code = NotFound desc = could not find container \"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132\": container with ID starting with 8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132 not found: ID does not exist"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.359090 4907 scope.go:117] "RemoveContainer" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"
Jan 27 19:33:43 crc kubenswrapper[4907]: E0127 19:33:43.362488 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd\": container with ID starting with ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd not found: ID does not exist" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.362647 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"} err="failed to get container status \"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd\": rpc error: code = NotFound desc = could not find container \"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd\": container with ID starting with ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd not found: ID does not exist"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.362758 4907 scope.go:117] "RemoveContainer" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"
Jan 27 19:33:43 crc kubenswrapper[4907]: E0127 19:33:43.363680 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde\": container with ID starting with 887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde not found: ID does not exist" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.363808 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"} err="failed to get container status \"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde\": rpc error: code = NotFound desc = could not find container \"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde\": container with ID starting with 887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde not found: ID does not exist"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.770417 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fc7a46-9732-43c0-af69-22c598778530" path="/var/lib/kubelet/pods/c9fc7a46-9732-43c0-af69-22c598778530/volumes"
Jan 27 19:33:52 crc kubenswrapper[4907]: I0127 19:33:52.748135 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:33:52 crc kubenswrapper[4907]: E0127 19:33:52.749132 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:34:06 crc kubenswrapper[4907]: I0127 19:34:06.748855 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:34:06 crc kubenswrapper[4907]: E0127 19:34:06.751922 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:34:19 crc kubenswrapper[4907]: I0127 19:34:19.747835 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:34:19 crc kubenswrapper[4907]: E0127 19:34:19.748841 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.211475 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"]
Jan 27 19:34:30 crc kubenswrapper[4907]: E0127 19:34:30.220386 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-content"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220404 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-content"
Jan 27 19:34:30 crc kubenswrapper[4907]: E0127 19:34:30.220436 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-utilities"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220442 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-utilities"
Jan 27 19:34:30 crc kubenswrapper[4907]: E0127 19:34:30.220458 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220466 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220679 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.221957 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.226336 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cl48g"/"default-dockercfg-j8vh7"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.234179 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cl48g"/"kube-root-ca.crt"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.234519 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cl48g"/"openshift-service-ca.crt"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.247129 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"]
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.307191 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.307312 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.409409 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.409600 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.410150 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.435273 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.552690 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:34:31 crc kubenswrapper[4907]: I0127 19:34:31.075601 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"]
Jan 27 19:34:31 crc kubenswrapper[4907]: I0127 19:34:31.870850 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerStarted","Data":"8a942117ac3981a77accc1c44d9271b4892567e310d9c25d125f2987cf3afee3"}
Jan 27 19:34:33 crc kubenswrapper[4907]: I0127 19:34:33.748791 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:34:33 crc kubenswrapper[4907]: E0127 19:34:33.749588 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:34:44 crc kubenswrapper[4907]: I0127 19:34:44.021874 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerStarted","Data":"fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee"}
Jan 27 19:34:44 crc kubenswrapper[4907]: I0127 19:34:44.022481 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerStarted","Data":"7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538"}
Jan 27 19:34:44 crc kubenswrapper[4907]: I0127 19:34:44.039417 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cl48g/must-gather-s7n67" podStartSLOduration=1.68298162 podStartE2EDuration="14.039395735s" podCreationTimestamp="2026-01-27 19:34:30 +0000 UTC" firstStartedPulling="2026-01-27 19:34:31.080039071 +0000 UTC m=+5326.209321683" lastFinishedPulling="2026-01-27 19:34:43.436453186 +0000 UTC m=+5338.565735798" observedRunningTime="2026-01-27 19:34:44.035042322 +0000 UTC m=+5339.164324934" watchObservedRunningTime="2026-01-27 19:34:44.039395735 +0000 UTC m=+5339.168678347"
Jan 27 19:34:45 crc kubenswrapper[4907]: I0127 19:34:45.760846 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:34:45 crc kubenswrapper[4907]: E0127 19:34:45.761634 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.558938 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/crc-debug-vj7kk"]
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.562922 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.584330 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.584649 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.687030 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.687515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.688889 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.711225 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.890419 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk"
Jan 27 19:34:52 crc kubenswrapper[4907]: I0127 19:34:52.144661 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" event={"ID":"1be94d37-98c3-436c-a243-ddf6745f4d7a","Type":"ContainerStarted","Data":"4ab9bf9797d13dbe65626538685553d1d2e55e44420ed8b27f3d4d0d0f17578a"}
Jan 27 19:34:58 crc kubenswrapper[4907]: I0127 19:34:58.748288 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:34:58 crc kubenswrapper[4907]: E0127 19:34:58.749247 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.253415 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.256216 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs8bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-vj7kk_openshift-must-gather-cl48g(1be94d37-98c3-436c-a243-ddf6745f4d7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.257511 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.428420 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"]
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.432089 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.437703 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.443155 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"]
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.473528 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.473684 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.473707 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.575371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.575413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.575526 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.576029 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.576038 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.606510 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.769517 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:09 crc kubenswrapper[4907]: I0127 19:35:09.375576 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"]
Jan 27 19:35:09 crc kubenswrapper[4907]: I0127 19:35:09.450012 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerStarted","Data":"87d77272dc3b6d8210720fa55629434c59d66bd02a70f6738c235b56f4cea0dc"}
Jan 27 19:35:10 crc kubenswrapper[4907]: I0127 19:35:10.471159 4907 generic.go:334] "Generic (PLEG): container finished" podID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" exitCode=0
Jan 27 19:35:10 crc kubenswrapper[4907]: I0127 19:35:10.471253 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052"}
Jan 27 19:35:10 crc kubenswrapper[4907]: I0127 19:35:10.749877 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:35:10 crc kubenswrapper[4907]: E0127 19:35:10.750211 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:35:12 crc kubenswrapper[4907]: I0127 19:35:12.495026 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerStarted","Data":"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a"}
Jan 27 19:35:18 crc kubenswrapper[4907]: I0127 19:35:18.576626 4907 generic.go:334] "Generic (PLEG): container finished" podID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" exitCode=0
Jan 27 19:35:18 crc kubenswrapper[4907]: I0127 19:35:18.577444 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a"}
Jan 27 19:35:19 crc kubenswrapper[4907]: I0127 19:35:19.610988 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerStarted","Data":"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b"}
Jan 27 19:35:19 crc kubenswrapper[4907]: I0127 19:35:19.641943 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jwr9q" podStartSLOduration=3.145876365 podStartE2EDuration="11.641926128s" podCreationTimestamp="2026-01-27 19:35:08 +0000 UTC" firstStartedPulling="2026-01-27 19:35:10.477967733 +0000 UTC m=+5365.607250345" lastFinishedPulling="2026-01-27 19:35:18.974017496 +0000 UTC m=+5374.103300108" observedRunningTime="2026-01-27 19:35:19.635134706 +0000 UTC m=+5374.764417318" watchObservedRunningTime="2026-01-27 19:35:19.641926128 +0000 UTC m=+5374.771208740"
Jan 27 19:35:25 crc kubenswrapper[4907]: I0127 19:35:25.678196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" event={"ID":"1be94d37-98c3-436c-a243-ddf6745f4d7a","Type":"ContainerStarted","Data":"4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23"}
Jan 27 19:35:25 crc kubenswrapper[4907]: I0127 19:35:25.703865 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" podStartSLOduration=2.094540754 podStartE2EDuration="34.703842438s" podCreationTimestamp="2026-01-27 19:34:51 +0000 UTC" firstStartedPulling="2026-01-27 19:34:51.962415098 +0000 UTC m=+5347.091697710" lastFinishedPulling="2026-01-27 19:35:24.571716782 +0000 UTC m=+5379.700999394" observedRunningTime="2026-01-27 19:35:25.696713796 +0000 UTC m=+5380.825996408" watchObservedRunningTime="2026-01-27 19:35:25.703842438 +0000 UTC m=+5380.833125050"
Jan 27 19:35:25 crc kubenswrapper[4907]: I0127 19:35:25.757962 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:35:25 crc kubenswrapper[4907]: E0127 19:35:25.758275 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:35:28 crc kubenswrapper[4907]: I0127 19:35:28.770433 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:28 crc kubenswrapper[4907]: I0127 19:35:28.771178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jwr9q"
Jan 27 19:35:29 crc kubenswrapper[4907]: I0127 19:35:29.849491 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:35:29 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:35:29 crc kubenswrapper[4907]: >
Jan 27 19:35:38 crc kubenswrapper[4907]: I0127 19:35:38.748653 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:35:38 crc kubenswrapper[4907]: E0127 19:35:38.750019 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:35:39 crc kubenswrapper[4907]: I0127 19:35:39.835986 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:35:39 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:35:39 crc kubenswrapper[4907]: >
Jan 27 19:35:49 crc kubenswrapper[4907]: I0127 19:35:49.838900 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:35:49 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:35:49 crc kubenswrapper[4907]: >
Jan 27 19:35:52 crc kubenswrapper[4907]: I0127 19:35:52.748039 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:35:52 crc kubenswrapper[4907]: E0127 19:35:52.749043 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:35:59 crc kubenswrapper[4907]: I0127 19:35:59.828149 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=< Jan 27 19:35:59 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:35:59 crc kubenswrapper[4907]: > Jan 27 19:36:07 crc kubenswrapper[4907]: I0127 19:36:07.748233 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:36:08 crc kubenswrapper[4907]: I0127 19:36:08.152196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f"} Jan 27 19:36:08 crc kubenswrapper[4907]: I0127 19:36:08.830298 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:08 crc kubenswrapper[4907]: I0127 19:36:08.896609 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:09 crc kubenswrapper[4907]: I0127 19:36:09.615304 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:36:10 crc kubenswrapper[4907]: I0127 19:36:10.193263 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" 
containerName="registry-server" containerID="cri-o://be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" gracePeriod=2 Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.157305 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.162739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"b3b5fc01-3f3e-4604-bec9-99128bef3139\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.162793 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"b3b5fc01-3f3e-4604-bec9-99128bef3139\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.162932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"b3b5fc01-3f3e-4604-bec9-99128bef3139\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.166718 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities" (OuterVolumeSpecName: "utilities") pod "b3b5fc01-3f3e-4604-bec9-99128bef3139" (UID: "b3b5fc01-3f3e-4604-bec9-99128bef3139"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.174089 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5" (OuterVolumeSpecName: "kube-api-access-kwch5") pod "b3b5fc01-3f3e-4604-bec9-99128bef3139" (UID: "b3b5fc01-3f3e-4604-bec9-99128bef3139"). InnerVolumeSpecName "kube-api-access-kwch5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239902 4907 generic.go:334] "Generic (PLEG): container finished" podID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" exitCode=0 Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b"} Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239991 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"87d77272dc3b6d8210720fa55629434c59d66bd02a70f6738c235b56f4cea0dc"} Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.240007 4907 scope.go:117] "RemoveContainer" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239988 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.266217 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.266256 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.279149 4907 scope.go:117] "RemoveContainer" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.312228 4907 scope.go:117] "RemoveContainer" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.317955 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3b5fc01-3f3e-4604-bec9-99128bef3139" (UID: "b3b5fc01-3f3e-4604-bec9-99128bef3139"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.369362 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.387372 4907 scope.go:117] "RemoveContainer" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" Jan 27 19:36:11 crc kubenswrapper[4907]: E0127 19:36:11.388063 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b\": container with ID starting with be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b not found: ID does not exist" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.389122 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b"} err="failed to get container status \"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b\": rpc error: code = NotFound desc = could not find container \"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b\": container with ID starting with be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b not found: ID does not exist" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.389173 4907 scope.go:117] "RemoveContainer" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" Jan 27 19:36:11 crc kubenswrapper[4907]: E0127 19:36:11.390411 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a\": container with ID starting with 08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a not found: ID does not exist" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.390450 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a"} err="failed to get container status \"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a\": rpc error: code = NotFound desc = could not find container \"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a\": container with ID starting with 08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a not found: ID does not exist" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.390465 4907 scope.go:117] "RemoveContainer" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" Jan 27 19:36:11 crc kubenswrapper[4907]: E0127 19:36:11.390866 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052\": container with ID starting with 8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052 not found: ID does not exist" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.390938 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052"} err="failed to get container status \"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052\": rpc error: code = NotFound desc = could not find container \"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052\": container with ID 
starting with 8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052 not found: ID does not exist" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.582821 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.593169 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.773513 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" path="/var/lib/kubelet/pods/b3b5fc01-3f3e-4604-bec9-99128bef3139/volumes" Jan 27 19:36:17 crc kubenswrapper[4907]: I0127 19:36:17.325832 4907 generic.go:334] "Generic (PLEG): container finished" podID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerID="4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23" exitCode=0 Jan 27 19:36:17 crc kubenswrapper[4907]: I0127 19:36:17.325953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" event={"ID":"1be94d37-98c3-436c-a243-ddf6745f4d7a","Type":"ContainerDied","Data":"4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23"} Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.493257 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.532532 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-vj7kk"] Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.545006 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-vj7kk"] Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.677315 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"1be94d37-98c3-436c-a243-ddf6745f4d7a\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.677539 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"1be94d37-98c3-436c-a243-ddf6745f4d7a\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.677688 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host" (OuterVolumeSpecName: "host") pod "1be94d37-98c3-436c-a243-ddf6745f4d7a" (UID: "1be94d37-98c3-436c-a243-ddf6745f4d7a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.679647 4907 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.684655 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj" (OuterVolumeSpecName: "kube-api-access-cs8bj") pod "1be94d37-98c3-436c-a243-ddf6745f4d7a" (UID: "1be94d37-98c3-436c-a243-ddf6745f4d7a"). InnerVolumeSpecName "kube-api-access-cs8bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.803215 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.346759 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab9bf9797d13dbe65626538685553d1d2e55e44420ed8b27f3d4d0d0f17578a" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.346886 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.761697 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" path="/var/lib/kubelet/pods/1be94d37-98c3-436c-a243-ddf6745f4d7a/volumes" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.816716 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/crc-debug-xqbdj"] Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817254 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-content" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817272 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-content" Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817306 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-utilities" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817315 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-utilities" Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817337 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerName="container-00" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817346 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerName="container-00" Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817370 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817378 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817806 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerName="container-00" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817841 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.818642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.929866 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.930854 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.032546 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.032741 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.032929 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.256488 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.438387 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:21 crc kubenswrapper[4907]: I0127 19:36:21.371714 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerID="7ee1baf3c951ba0f96d98c51cb27f55aa01d16ba7f006a79294f4c292c0dc22c" exitCode=0 Jan 27 19:36:21 crc kubenswrapper[4907]: I0127 19:36:21.371809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" event={"ID":"6dd3c651-908f-45ce-ac85-5fa959304ab4","Type":"ContainerDied","Data":"7ee1baf3c951ba0f96d98c51cb27f55aa01d16ba7f006a79294f4c292c0dc22c"} Jan 27 19:36:21 crc kubenswrapper[4907]: I0127 19:36:21.372327 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" event={"ID":"6dd3c651-908f-45ce-ac85-5fa959304ab4","Type":"ContainerStarted","Data":"939824f31d588a04ef1a1007458f4cc9252646915f408a5fd45ad47e95372388"} Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.517362 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.590286 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"6dd3c651-908f-45ce-ac85-5fa959304ab4\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.590748 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"6dd3c651-908f-45ce-ac85-5fa959304ab4\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.590344 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host" (OuterVolumeSpecName: "host") pod "6dd3c651-908f-45ce-ac85-5fa959304ab4" (UID: "6dd3c651-908f-45ce-ac85-5fa959304ab4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.591744 4907 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.599457 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w" (OuterVolumeSpecName: "kube-api-access-d7x9w") pod "6dd3c651-908f-45ce-ac85-5fa959304ab4" (UID: "6dd3c651-908f-45ce-ac85-5fa959304ab4"). InnerVolumeSpecName "kube-api-access-d7x9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.694037 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.302658 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-xqbdj"] Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.314009 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-xqbdj"] Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.393081 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939824f31d588a04ef1a1007458f4cc9252646915f408a5fd45ad47e95372388" Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.393406 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.760886 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" path="/var/lib/kubelet/pods/6dd3c651-908f-45ce-ac85-5fa959304ab4/volumes" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.495426 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/crc-debug-ntbgs"] Jan 27 19:36:24 crc kubenswrapper[4907]: E0127 19:36:24.496481 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerName="container-00" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.496507 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerName="container-00" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.496847 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerName="container-00" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.497999 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.636157 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.636396 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.738944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.739061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.739138 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc 
kubenswrapper[4907]: I0127 19:36:24.757296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.823938 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.414639 4907 generic.go:334] "Generic (PLEG): container finished" podID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerID="6804d0110d95eee1f118a565f667cb8f164ca28b6ea84699a03b0ccca3037f3a" exitCode=0 Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.414748 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" event={"ID":"518a242e-cb81-4a13-ab33-41ce20f654ae","Type":"ContainerDied","Data":"6804d0110d95eee1f118a565f667cb8f164ca28b6ea84699a03b0ccca3037f3a"} Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.415038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" event={"ID":"518a242e-cb81-4a13-ab33-41ce20f654ae","Type":"ContainerStarted","Data":"c80417f25860d94c303ce7e8b5a07dab905dece484c79d6f2b70fdd30d8969cd"} Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.457663 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-ntbgs"] Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.472305 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-ntbgs"] Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.569217 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.693795 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"518a242e-cb81-4a13-ab33-41ce20f654ae\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.693881 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"518a242e-cb81-4a13-ab33-41ce20f654ae\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.695424 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host" (OuterVolumeSpecName: "host") pod "518a242e-cb81-4a13-ab33-41ce20f654ae" (UID: "518a242e-cb81-4a13-ab33-41ce20f654ae"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.705041 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl" (OuterVolumeSpecName: "kube-api-access-fgngl") pod "518a242e-cb81-4a13-ab33-41ce20f654ae" (UID: "518a242e-cb81-4a13-ab33-41ce20f654ae"). InnerVolumeSpecName "kube-api-access-fgngl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.797162 4907 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.797205 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:27 crc kubenswrapper[4907]: I0127 19:36:27.438801 4907 scope.go:117] "RemoveContainer" containerID="6804d0110d95eee1f118a565f667cb8f164ca28b6ea84699a03b0ccca3037f3a" Jan 27 19:36:27 crc kubenswrapper[4907]: I0127 19:36:27.438850 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:27 crc kubenswrapper[4907]: I0127 19:36:27.763153 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" path="/var/lib/kubelet/pods/518a242e-cb81-4a13-ab33-41ce20f654ae/volumes" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.241774 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-api/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.458600 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-listener/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.534313 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-evaluator/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.544199 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-notifier/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.720422 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c7bdc78db-g6vvs_eb7e48e3-f92d-4ee4-9074-9e035a54c8dc/barbican-api-log/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.759519 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c7bdc78db-g6vvs_eb7e48e3-f92d-4ee4-9074-9e035a54c8dc/barbican-api/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.894674 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f9fb848c6-w9s7n_06cb3a1d-b998-43fe-8939-29cd2c3fd31f/barbican-keystone-listener/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.036261 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-cc6c576c9-l5q6m_72844033-17b7-4a8b-973d-f8ef443cd529/barbican-worker/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.060708 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f9fb848c6-w9s7n_06cb3a1d-b998-43fe-8939-29cd2c3fd31f/barbican-keystone-listener-log/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.195933 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-cc6c576c9-l5q6m_72844033-17b7-4a8b-973d-f8ef443cd529/barbican-worker-log/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.310895 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj_172533fc-3de0-4a67-91d4-d54dbbf6e0e8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.619793 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/ceilometer-central-agent/1.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.156447 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/ceilometer-notification-agent/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.189216 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/proxy-httpd/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.199588 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/sg-core/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.215189 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/ceilometer-central-agent/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.397228 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f62bd754-7667-406a-9883-2015ddcc3f16/cinder-api-log/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.495777 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f62bd754-7667-406a-9883-2015ddcc3f16/cinder-api/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.577624 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_621bccf6-c3e9-4b2d-821b-217848191c27/cinder-scheduler/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.634528 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_621bccf6-c3e9-4b2d-821b-217848191c27/cinder-scheduler/1.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.707433 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_621bccf6-c3e9-4b2d-821b-217848191c27/probe/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.805831 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5_0aabc401-314e-438d-920e-1f984949944c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.939985 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn_b8f3066f-ed2e-42b5-94ff-e989771dbe8e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.037348 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hhml4_5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e/init/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.334727 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hhml4_5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e/dnsmasq-dns/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.339051 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j_ad792b6c-ce47-4ef4-964c-e91423a94f1b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.343448 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hhml4_5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e/init/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.110458 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_34586e59-e405-4871-9eb7-6ec0251bc992/glance-httpd/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.141572 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_34586e59-e405-4871-9eb7-6ec0251bc992/glance-log/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.363374 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_79b7035b-7e7c-40e4-86a8-d1499df47d5f/glance-httpd/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.418540 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_79b7035b-7e7c-40e4-86a8-d1499df47d5f/glance-log/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.750940 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7b8679c4d-pw2cq_14d9243a-0abc-40ce-9881-eef907bdafe3/heat-api/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.135261 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-h47vc_4cadb1da-1dd2-49ac-a171-c672c006bfa8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.162824 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-668f78b-db9cs_0d2540a9-525b-46c6-b0ae-23e163484c98/heat-engine/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.358921 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-96749fcd4-hh92n_effdf66a-d041-45e1-a1f0-bd1367a2d80a/heat-cfnapi/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.397513 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fhxmq_daa3c495-5c9e-45cf-b66a-c452e54e9c06/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.731287 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29492341-l97qr_6e412045-8e45-4718-98e5-17e76c69623a/keystone-cron/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.037653 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_edbdf1e9-d0d7-458d-8f5a-891ee37d7483/kube-state-metrics/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.136343 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-kxr54_a1ab6c99-0bb2-45ca-9dc8-1d6da396d011/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.260189 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-k5dbc_cd8ce37e-984e-48a7-afcf-98798042a1c4/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.303304 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-84847858bd-jp29w_345bd96a-9890-4264-886f-edccc999706b/keystone-api/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.576999 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_611de5af-e33a-4aca-88c7-201f7c0e6cf9/mysqld-exporter/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.914905 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc_30518ac3-ca77-4963-8ab9-1f0dd9c596eb/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.996570 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74c6c685b5-88m65_eb34862c-021c-4e5e-b4c0-ceffb9222438/neutron-httpd/0.log" Jan 27 19:37:00 crc kubenswrapper[4907]: I0127 19:37:00.146406 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-74c6c685b5-88m65_eb34862c-021c-4e5e-b4c0-ceffb9222438/neutron-api/0.log" Jan 27 19:37:00 crc kubenswrapper[4907]: I0127 19:37:00.679065 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aafbf219-964f-4436-964e-7ad85e0eb56b/nova-api-log/0.log" Jan 27 19:37:00 crc kubenswrapper[4907]: I0127 19:37:00.726110 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7a7fd860-ac95-4571-99c5-b416f9a9bae9/nova-cell0-conductor-conductor/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.122536 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aafbf219-964f-4436-964e-7ad85e0eb56b/nova-api-api/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.141948 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4d13569d-0cc7-4ce3-ae16-b72ef4ea170c/nova-cell1-conductor-conductor/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.142355 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3257c75e-f45f-4166-b7ba-66c1990ac2dc/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.414266 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7/nova-metadata-log/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.428945 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cjhx8_c3ad9414-0787-40c9-a907-d59ec160f1dd/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.917270 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a/nova-scheduler-scheduler/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.988739 
4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/mysql-bootstrap/0.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.173314 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/galera/1.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.182829 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/mysql-bootstrap/0.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.241898 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/galera/0.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.532521 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/mysql-bootstrap/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.045495 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/galera/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.056437 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/galera/1.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.097928 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/mysql-bootstrap/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.274342 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8cea1342-da85-42e5-a54b-98b132f7871f/openstackclient/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.577965 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-96prz_daaea3c0-a88d-442f-be06-bb95b2825fcc/ovn-controller/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.656660 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jxkhc_af6ab393-1e13-4683-81ae-6e28d9261d30/openstack-network-exporter/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.848112 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovsdb-server-init/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.025341 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7/nova-metadata-metadata/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.041774 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovsdb-server-init/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.046238 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovs-vswitchd/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.174812 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovsdb-server/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.424011 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c4f5ec64-0863-45ef-9090-4768ecd34667/openstack-network-exporter/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.466058 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ldwl4_1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 
19:37:05.179174 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c4f5ec64-0863-45ef-9090-4768ecd34667/ovn-northd/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.180756 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_32811f4d-c205-437d-a06c-ac4fff30cead/openstack-network-exporter/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.253275 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_32811f4d-c205-437d-a06c-ac4fff30cead/ovsdbserver-nb/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.456345 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7185e8ed-9479-43cc-814b-cfcd26e548a5/openstack-network-exporter/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.499258 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7185e8ed-9479-43cc-814b-cfcd26e548a5/ovsdbserver-sb/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.884031 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/init-config-reloader/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.885844 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bb5448674-jfs9k_4038dea7-e4ef-436d-baf3-47f8757e3bc0/placement-log/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.919006 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bb5448674-jfs9k_4038dea7-e4ef-436d-baf3-47f8757e3bc0/placement-api/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.071293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/init-config-reloader/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 
19:37:06.115739 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/prometheus/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.138226 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/config-reloader/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.173605 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/thanos-sidecar/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.344311 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_021272d4-b660-4c16-b9a6-befd84abe2cc/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.024983 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_021272d4-b660-4c16-b9a6-befd84abe2cc/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.037918 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_021272d4-b660-4c16-b9a6-befd84abe2cc/rabbitmq/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.038142 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be9e879-df48-4aea-9f07-b297cabca4f3/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.348653 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be9e879-df48-4aea-9f07-b297cabca4f3/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.432648 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0e0246bb-5533-495d-849f-617b346c8fde/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.489681 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be9e879-df48-4aea-9f07-b297cabca4f3/rabbitmq/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.616697 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0e0246bb-5533-495d-849f-617b346c8fde/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.770585 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_5f8e936e-82a6-49cc-bb09-d247a2d0e47b/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.785293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0e0246bb-5533-495d-849f-617b346c8fde/rabbitmq/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.098297 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_5f8e936e-82a6-49cc-bb09-d247a2d0e47b/setup-container/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.209689 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb_9dcf4e25-6609-484b-98b6-a7c96c0a2c4a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.236500 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_5f8e936e-82a6-49cc-bb09-d247a2d0e47b/rabbitmq/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.501568 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tgbss_2872f844-3f1a-4d9b-8f96-5cc01d0cae12/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.632788 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4_de193c6b-eba4-4eb3-95c4-0d7fe875691f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.835261 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-56b44_ff08f4dc-f4e3-4e83-b922-32b6296fbee0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.025776 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vlfg7_71334cb5-9354-4f68-91bf-8631e5fa045a/ssh-known-hosts-edpm-deployment/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.222370 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d47577fc9-fz5kg_bfb5201d-eb44-42cb-a5ab-49520cc1e741/proxy-server/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.359593 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-m9rr7_a5ce2510-00de-4a5b-8d9d-578b21229c8c/swift-ring-rebalance/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.366155 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d47577fc9-fz5kg_bfb5201d-eb44-42cb-a5ab-49520cc1e741/proxy-httpd/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.601570 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-auditor/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.612965 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-reaper/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.637340 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-replicator/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.804793 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-auditor/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.807332 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-server/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.921213 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-replicator/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.964317 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-server/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.308263 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-auditor/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.317382 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-updater/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.425447 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-expirer/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.524198 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-replicator/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.722927 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-server/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.741453 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-updater/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.795438 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/rsync/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.998595 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/swift-recon-cron/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.162535 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-98h4r_fbb41855-75d9-4678-8e5c-7602c99dbf1c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.305682 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx_36c00f4a-e4e0-472b-a51c-510d44296cf8/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.600654 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9364bcb6-d99e-42e9-9f1a-58054d2a59ab/test-operator-logs-container/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.608767 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_019838dd-5c5f-40f0-a169-09156549d64c/tempest-tests-tempest-tests-runner/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.800113 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2_907876b3-4761-4612-9c26-3479222c6b72/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:13 crc kubenswrapper[4907]: I0127 19:37:13.765776 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_407bf5df-e69a-49ae-ac93-858be78d98a0/memcached/0.log" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.328092 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:37:40 crc kubenswrapper[4907]: E0127 19:37:40.329308 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerName="container-00" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.329326 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerName="container-00" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.329642 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerName="container-00" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.334455 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.344694 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.454515 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.454733 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.454894 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557187 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557411 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557510 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557820 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557915 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:41 crc kubenswrapper[4907]: I0127 19:37:41.154647 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:41 crc kubenswrapper[4907]: I0127 19:37:41.265826 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:41 crc kubenswrapper[4907]: I0127 19:37:41.928279 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.335010 4907 generic.go:334] "Generic (PLEG): container finished" podID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerID="9e6cbd06744b574913b53f660db9189b287529c9f60c390d7656bf7d99231bfe" exitCode=0 Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.335226 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"9e6cbd06744b574913b53f660db9189b287529c9f60c390d7656bf7d99231bfe"} Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.335357 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerStarted","Data":"0efc126b62a887aba832cb2392ed15dc7df4335e49f9a652e9b5ff1291105dbc"} Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.338337 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.356999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerStarted","Data":"0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9"} Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.759896 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/util/0.log" Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.964210 4907 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/pull/0.log" Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.985815 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/util/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.026088 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/pull/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.254610 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/extract/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.262438 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/util/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.296615 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/pull/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.549525 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-nznnn_018e0dfe-5282-40d5-87db-8551645d6e02/manager/1.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.600495 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-nznnn_018e0dfe-5282-40d5-87db-8551645d6e02/manager/0.log" Jan 27 19:37:45 crc 
kubenswrapper[4907]: I0127 19:37:45.625544 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-8jsvt_e6378a4c-96e5-4151-a0ca-c320fa9b667d/manager/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.819876 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6lprh_277579e8-58c3-4ad7-b902-e62f045ba8c6/manager/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.947655 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-7hgqc_a05cfe48-4bf5-4199-aefa-de59259798c4/manager/1.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.122442 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-7hgqc_a05cfe48-4bf5-4199-aefa-de59259798c4/manager/0.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.308144 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-4nlx7_e9f20d2f-16bf-49df-9c41-6fd6faa6ef67/manager/0.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.377186 4907 generic.go:334] "Generic (PLEG): container finished" podID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerID="0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9" exitCode=0 Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.377226 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9"} Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.430009 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-b29cj_f1ed42c6-98ac-41b8-96df-24919c0f9837/manager/1.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.503607 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-b29cj_f1ed42c6-98ac-41b8-96df-24919c0f9837/manager/0.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.815965 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-hb2q7_c4a64f11-d6ef-487e-afa3-1d9bdbea9424/manager/1.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.890699 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-mrpqf_7c6ac148-bc7a-4480-9155-8f78567a5070/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.077616 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-hb2q7_c4a64f11-d6ef-487e-afa3-1d9bdbea9424/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.139582 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-kjhgn_e257f81e-9460-4391-a7a5-cca3fc9230d9/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.306010 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-mst5f_bc6ebe7e-320a-4193-8db4-3d4574ba1c3b/manager/1.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.394401 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerStarted","Data":"2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a"} Jan 27 19:37:47 crc 
kubenswrapper[4907]: I0127 19:37:47.418692 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wl5q" podStartSLOduration=2.920042811 podStartE2EDuration="7.418671228s" podCreationTimestamp="2026-01-27 19:37:40 +0000 UTC" firstStartedPulling="2026-01-27 19:37:42.337135903 +0000 UTC m=+5517.466418515" lastFinishedPulling="2026-01-27 19:37:46.835764309 +0000 UTC m=+5521.965046932" observedRunningTime="2026-01-27 19:37:47.411836595 +0000 UTC m=+5522.541119207" watchObservedRunningTime="2026-01-27 19:37:47.418671228 +0000 UTC m=+5522.547953840" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.495738 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-9t69q_f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b/manager/1.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.558396 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-mst5f_bc6ebe7e-320a-4193-8db4-3d4574ba1c3b/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.732196 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-9t69q_f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.757955 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-l2pdl_774ac09a-4164-4e22-9ea2-385ac4ef87eb/manager/1.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.902800 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-l2pdl_774ac09a-4164-4e22-9ea2-385ac4ef87eb/manager/0.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.114521 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-fnh99_bd2d065d-dd6e-43bc-a725-e7fe52c024b1/manager/1.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.179025 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-fnh99_bd2d065d-dd6e-43bc-a725-e7fe52c024b1/manager/0.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.341988 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-tn4d6_a733096f-e99d-4186-8542-1d8cb16012d2/manager/1.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.475577 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-tn4d6_a733096f-e99d-4186-8542-1d8cb16012d2/manager/0.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.539936 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9_8a6e2a40-e233-4dbe-9b63-0fecf3fc1487/manager/1.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.545148 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9_8a6e2a40-e233-4dbe-9b63-0fecf3fc1487/manager/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.148412 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c754559d6-wt8dc_f22de95d-f437-432c-917a-a08c082e02c4/operator/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.219424 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c754559d6-wt8dc_f22de95d-f437-432c-917a-a08c082e02c4/operator/1.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.540862 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xc2fp_0a849662-db42-42f0-9317-eb3714b775d0/registry-server/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.696443 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-bf27l_a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2/manager/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.825057 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-mpgzf_f84f4e53-c1de-49a3-8435-5e4999a034fd/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.015791 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gfl97_a4aa00b3-8a54-4f84-907d-34a73b93944f/operator/1.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.106256 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gfl97_a4aa00b3-8a54-4f84-907d-34a73b93944f/operator/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.305582 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-fljbt_24caa967-ac26-4666-bf41-e2c4bc6ebb0f/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.551997 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-ph8fw_7f5a8eee-f06b-4376-90d6-ff3faef0e8af/manager/1.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.644540 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-ph8fw_7f5a8eee-f06b-4376-90d6-ff3faef0e8af/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.683552 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f954ddc5b-fjchc_7707f450-bf8d-4e84-9baa-a02bc80a0b22/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.755012 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7567458d64-vvlhm_12b8e76f-853f-4eeb-b6c5-e77d05bec357/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.861578 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-wvnrt_ba33cbc9-9a56-4c45-8c07-19b4110e03c3/manager/0.log" Jan 27 19:37:51 crc kubenswrapper[4907]: I0127 19:37:51.265980 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:51 crc kubenswrapper[4907]: I0127 19:37:51.266025 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:51 crc kubenswrapper[4907]: I0127 19:37:51.325472 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:01 crc kubenswrapper[4907]: I0127 19:38:01.328478 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:01 crc kubenswrapper[4907]: I0127 19:38:01.387308 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:38:01 crc kubenswrapper[4907]: I0127 19:38:01.643831 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wl5q" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server" containerID="cri-o://2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a" gracePeriod=2 Jan 27 
19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.659785 4907 generic.go:334] "Generic (PLEG): container finished" podID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerID="2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a" exitCode=0 Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.659873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a"} Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.660393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"0efc126b62a887aba832cb2392ed15dc7df4335e49f9a652e9b5ff1291105dbc"} Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.660410 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0efc126b62a887aba832cb2392ed15dc7df4335e49f9a652e9b5ff1291105dbc" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.665764 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.767895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.768222 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.768275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.773880 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities" (OuterVolumeSpecName: "utilities") pod "c39e116a-bdc4-4a6a-94dd-6e1814c9532d" (UID: "c39e116a-bdc4-4a6a-94dd-6e1814c9532d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.788843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr" (OuterVolumeSpecName: "kube-api-access-thppr") pod "c39e116a-bdc4-4a6a-94dd-6e1814c9532d" (UID: "c39e116a-bdc4-4a6a-94dd-6e1814c9532d"). InnerVolumeSpecName "kube-api-access-thppr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.815162 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c39e116a-bdc4-4a6a-94dd-6e1814c9532d" (UID: "c39e116a-bdc4-4a6a-94dd-6e1814c9532d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.872780 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") on node \"crc\" DevicePath \"\"" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.873070 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.873082 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.671612 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.716048 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.734099 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.773476 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" path="/var/lib/kubelet/pods/c39e116a-bdc4-4a6a-94dd-6e1814c9532d/volumes" Jan 27 19:38:14 crc kubenswrapper[4907]: I0127 19:38:14.123766 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7v8cj_42d77196-c327-47c3-8713-d23038a08e13/control-plane-machine-set-operator/0.log" Jan 27 19:38:15 crc kubenswrapper[4907]: I0127 19:38:15.120364 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-znwrp_b3e7e0e7-2f37-4998-af7c-6e5d373a1264/machine-api-operator/0.log" Jan 27 19:38:15 crc kubenswrapper[4907]: I0127 19:38:15.141456 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-znwrp_b3e7e0e7-2f37-4998-af7c-6e5d373a1264/kube-rbac-proxy/0.log" Jan 27 19:38:26 crc kubenswrapper[4907]: I0127 19:38:26.520946 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:38:26 crc kubenswrapper[4907]: I0127 19:38:26.521653 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:38:28 crc kubenswrapper[4907]: I0127 19:38:28.604643 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-jslkq_1fa35228-e301-48b5-b17b-21694e61ef16/cert-manager-controller/0.log" Jan 27 19:38:29 crc kubenswrapper[4907]: I0127 19:38:29.619036 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-58hmb_19be711f-36d9-46ae-8f7a-fdba490484da/cert-manager-cainjector/0.log" Jan 27 19:38:29 crc kubenswrapper[4907]: I0127 19:38:29.643367 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jfhbt_53565dd2-5a29-4ba0-9654-36b9600f765b/cert-manager-webhook/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.586096 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-rhr2w_d3336bb0-ef0d-47f3-b3c7-de266154f20e/nmstate-console-plugin/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.773179 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wz5df_0b5adf10-ea9c-48b5-bece-3ee8683423e3/nmstate-handler/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.825293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-f7vbh_eeb93cd2-3631-4fad-a0d1-01232bbf9202/kube-rbac-proxy/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.888294 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-f7vbh_eeb93cd2-3631-4fad-a0d1-01232bbf9202/nmstate-metrics/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.995197 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-j277h_a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b/nmstate-operator/0.log" Jan 27 19:38:43 crc kubenswrapper[4907]: I0127 19:38:43.147259 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-5q5h2_c53f2859-15de-4c57-81ba-539c7787b649/nmstate-webhook/0.log" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.424508 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/kube-rbac-proxy/0.log" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.432712 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/1.log" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.521528 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.521602 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.637834 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/0.log" Jan 27 19:39:12 crc kubenswrapper[4907]: I0127 19:39:12.927908 4907 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s_91eb4541-31f7-488a-ae31-d57bfa265442/prometheus-operator-admission-webhook/0.log" Jan 27 19:39:12 crc kubenswrapper[4907]: I0127 19:39:12.977410 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-k7sff_d68ab367-2841-460c-b666-5b52ec455dd2/prometheus-operator/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.186285 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh_c1a068f6-1c40-4947-b9bd-3b018ddcb25b/prometheus-operator-admission-webhook/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.253907 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/1.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.462284 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.504819 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-s824m_6ccb4875-977f-4fea-b3fa-8a4e4ba5a874/observability-ui-dashboards/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.683774 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-65v8r_99183c02-34c0-4a91-9e6e-0efd5d2a7a42/perses-operator/0.log" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.521253 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.521889 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.521956 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.523267 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.523362 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f" gracePeriod=600 Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.611958 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f" exitCode=0 Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.612473 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f"} Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.612500 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"} Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.612517 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.239303 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-t7bh6_1f119aff-6ff6-4393-b7d5-19a981e50f3c/cluster-logging-operator/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.457468 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-2bmhz_e66fb20d-fb54-4964-9fb8-0ca14b94f895/collector/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.500796 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_2448dad5-d0f7-4335-a3fb-a23c5ef59bbf/loki-compactor/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.670851 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-zhq64_bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542/loki-distributor/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.724110 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-mwm5k_d57b015c-f3fc-424d-b910-96e63c6da31a/gateway/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.834982 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-mwm5k_d57b015c-f3fc-424d-b910-96e63c6da31a/opa/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.948755 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-njxl9_faf9da31-9bbb-43b4-9cc1-a80f95392ccf/opa/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.961807 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-njxl9_faf9da31-9bbb-43b4-9cc1-a80f95392ccf/gateway/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.151526 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_a9dc6389-0ad3-4259-aaf2-945493e66aa2/loki-index-gateway/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.194449 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_30b4b16e-4eff-46be-aac5-63d2b3d8fdf2/loki-ingester/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.383199 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-r2fdr_8f62d8a1-62d1-4206-b061-f75c44ff2450/loki-querier/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.444585 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-4ngf2_70874c1f-da0d-4389-8021-fd3003150fff/loki-query-frontend/0.log" Jan 27 19:39:48 crc kubenswrapper[4907]: I0127 19:39:48.098503 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zfszb_2ea123ce-4328-4379-8310-dbfff15acfbf/kube-rbac-proxy/0.log" Jan 27 19:39:48 crc kubenswrapper[4907]: I0127 19:39:48.253272 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-zfszb_2ea123ce-4328-4379-8310-dbfff15acfbf/controller/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.200425 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.468573 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.484707 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.485589 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.530364 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.734758 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.741579 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.784084 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.837756 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.001936 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.010917 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.074402 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.089851 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/controller/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.253874 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/frr-metrics/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.306297 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/kube-rbac-proxy/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.357717 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/frr/1.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.564659 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/kube-rbac-proxy-frr/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.606314 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/reloader/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.796800 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-n9qqt_dd967d05-2ecd-4578-9c41-22e36ff088c1/frr-k8s-webhook-server/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.828731 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6858498495-rcqbh_9a776a10-0883-468e-a8d3-087ca6429b1b/manager/1.log" Jan 27 19:39:51 crc kubenswrapper[4907]: I0127 19:39:51.103896 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6858498495-rcqbh_9a776a10-0883-468e-a8d3-087ca6429b1b/manager/0.log" Jan 27 19:39:51 crc kubenswrapper[4907]: I0127 19:39:51.137473 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-548b7f8fd-7zpsk_202ff14a-3733-4ccf-8202-94fac75bdfc4/webhook-server/0.log" Jan 27 19:39:51 crc kubenswrapper[4907]: I0127 19:39:51.332708 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-597cv_aa958bdc-32c5-4e9f-841e-7427fdb87b31/kube-rbac-proxy/0.log" Jan 27 19:39:52 crc kubenswrapper[4907]: I0127 19:39:52.056583 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-597cv_aa958bdc-32c5-4e9f-841e-7427fdb87b31/speaker/0.log" Jan 27 19:39:52 crc kubenswrapper[4907]: I0127 19:39:52.142773 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/frr/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.050943 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/util/0.log" Jan 27 19:40:07 
crc kubenswrapper[4907]: I0127 19:40:07.376453 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/util/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.394450 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/pull/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.398777 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/pull/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.615846 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/util/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.665041 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/extract/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.669771 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/pull/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.837229 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/util/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.642667 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/pull/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.663300 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/pull/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.675998 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/util/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.917869 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/pull/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.954653 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/util/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.977311 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/extract/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.164113 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/util/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.407515 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/util/0.log" Jan 27 
19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.442364 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/pull/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.465448 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/pull/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.659652 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/util/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.690824 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/extract/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.725022 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/pull/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.904506 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/util/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.089491 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/pull/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.101928 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/util/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.131282 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/pull/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.609771 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/util/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.668951 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/extract/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.670742 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/pull/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.802901 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/util/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.043893 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/util/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.077397 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/pull/0.log" Jan 27 
19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.077795 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/pull/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.287149 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/extract/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.307756 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/util/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.334701 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/pull/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.409032 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-utilities/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.587235 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-content/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.589717 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-utilities/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.600617 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-content/0.log" Jan 27 
19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.797887 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-content/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.810343 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-utilities/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.845268 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-utilities/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.165221 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-content/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.194601 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-utilities/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.226572 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-content/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.614745 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-utilities/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.641670 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-content/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.664188 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/registry-server/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.840772 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-87z2b_5564598e-ff23-4f9e-b3de-64e127e94da6/marketplace-operator/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.873911 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.164743 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.198000 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-content/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.208308 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-content/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.487214 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.487360 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-content/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.705966 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.874913 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/registry-server/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.926167 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/registry-server/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.960732 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-utilities/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.009185 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-content/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.024544 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-content/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.195045 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-utilities/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.231430 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-content/0.log" Jan 27 19:40:15 crc kubenswrapper[4907]: I0127 19:40:15.178965 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/registry-server/0.log" Jan 27 
19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.732807 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s_91eb4541-31f7-488a-ae31-d57bfa265442/prometheus-operator-admission-webhook/0.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.786962 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh_c1a068f6-1c40-4947-b9bd-3b018ddcb25b/prometheus-operator-admission-webhook/0.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.798954 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-k7sff_d68ab367-2841-460c-b666-5b52ec455dd2/prometheus-operator/0.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.973685 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/1.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.980904 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-s824m_6ccb4875-977f-4fea-b3fa-8a4e4ba5a874/observability-ui-dashboards/0.log" Jan 27 19:40:29 crc kubenswrapper[4907]: I0127 19:40:29.022235 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/0.log" Jan 27 19:40:29 crc kubenswrapper[4907]: I0127 19:40:29.060317 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-65v8r_99183c02-34c0-4a91-9e6e-0efd5d2a7a42/perses-operator/0.log" Jan 27 19:40:44 crc kubenswrapper[4907]: I0127 19:40:44.555229 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/kube-rbac-proxy/0.log" Jan 27 19:40:44 crc kubenswrapper[4907]: I0127 19:40:44.682652 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/1.log" Jan 27 19:40:44 crc kubenswrapper[4907]: I0127 19:40:44.713127 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/0.log" Jan 27 19:41:11 crc kubenswrapper[4907]: E0127 19:41:11.256341 4907 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.184:40824->38.102.83.184:45697: write tcp 38.102.83.184:40824->38.102.83.184:45697: write: broken pipe Jan 27 19:41:26 crc kubenswrapper[4907]: I0127 19:41:26.521113 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:41:26 crc kubenswrapper[4907]: I0127 19:41:26.521663 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:41:56 crc kubenswrapper[4907]: I0127 19:41:56.521347 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body=
Jan 27 19:41:56 crc kubenswrapper[4907]: I0127 19:41:56.522021 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:42:03 crc kubenswrapper[4907]: I0127 19:42:03.546975 4907 scope.go:117] "RemoveContainer" containerID="4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.664091 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"]
Jan 27 19:42:05 crc kubenswrapper[4907]: E0127 19:42:05.665594 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-content"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.665615 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-content"
Jan 27 19:42:05 crc kubenswrapper[4907]: E0127 19:42:05.665655 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.665663 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server"
Jan 27 19:42:05 crc kubenswrapper[4907]: E0127 19:42:05.665677 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-utilities"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.665686 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-utilities"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.666004 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.668278 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.695786 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"]
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.835538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.835674 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.835879 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.941413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.941943 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.942028 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.943833 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.944090 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:06 crc kubenswrapper[4907]: I0127 19:42:06.055547 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:06 crc kubenswrapper[4907]: I0127 19:42:06.295686 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:07 crc kubenswrapper[4907]: I0127 19:42:07.316699 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"]
Jan 27 19:42:07 crc kubenswrapper[4907]: I0127 19:42:07.434079 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerStarted","Data":"b1a93475143b5e6aaa1f41b20b5f1ca3e9c28fb27e98f77735e4e68a0d1e4648"}
Jan 27 19:42:08 crc kubenswrapper[4907]: I0127 19:42:08.449875 4907 generic.go:334] "Generic (PLEG): container finished" podID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerID="3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c" exitCode=0
Jan 27 19:42:08 crc kubenswrapper[4907]: I0127 19:42:08.449928 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c"}
Jan 27 19:42:08 crc kubenswrapper[4907]: E0127 19:42:08.539696 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b177ad6_c99a_4f62_8fd6_c223bc910e39.slice/crio-3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 19:42:09 crc kubenswrapper[4907]: I0127 19:42:09.464410 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerStarted","Data":"4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0"}
Jan 27 19:42:11 crc kubenswrapper[4907]: I0127 19:42:11.487864 4907 generic.go:334] "Generic (PLEG): container finished" podID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerID="4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0" exitCode=0
Jan 27 19:42:11 crc kubenswrapper[4907]: I0127 19:42:11.487911 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0"}
Jan 27 19:42:12 crc kubenswrapper[4907]: I0127 19:42:12.522485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerStarted","Data":"5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15"}
Jan 27 19:42:12 crc kubenswrapper[4907]: I0127 19:42:12.560121 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlkk7" podStartSLOduration=4.146293071 podStartE2EDuration="7.560096819s" podCreationTimestamp="2026-01-27 19:42:05 +0000 UTC" firstStartedPulling="2026-01-27 19:42:08.451824579 +0000 UTC m=+5783.581107201" lastFinishedPulling="2026-01-27 19:42:11.865628337 +0000 UTC m=+5786.994910949" observedRunningTime="2026-01-27 19:42:12.541502053 +0000 UTC m=+5787.670784665" watchObservedRunningTime="2026-01-27 19:42:12.560096819 +0000 UTC m=+5787.689379431"
Jan 27 19:42:16 crc kubenswrapper[4907]: I0127 19:42:16.296484 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:16 crc kubenswrapper[4907]: I0127 19:42:16.298690 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:16 crc kubenswrapper[4907]: I0127 19:42:16.412145 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:18 crc kubenswrapper[4907]: I0127 19:42:18.012495 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:18 crc kubenswrapper[4907]: I0127 19:42:18.071542 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"]
Jan 27 19:42:19 crc kubenswrapper[4907]: I0127 19:42:19.641792 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlkk7" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server" containerID="cri-o://5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15" gracePeriod=2
Jan 27 19:42:20 crc kubenswrapper[4907]: I0127 19:42:20.670675 4907 generic.go:334] "Generic (PLEG): container finished" podID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerID="5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15" exitCode=0
Jan 27 19:42:20 crc kubenswrapper[4907]: I0127 19:42:20.671021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15"}
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.383991 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.479833 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") "
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.480323 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") "
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.480374 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") "
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.481196 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities" (OuterVolumeSpecName: "utilities") pod "4b177ad6-c99a-4f62-8fd6-c223bc910e39" (UID: "4b177ad6-c99a-4f62-8fd6-c223bc910e39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.493105 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw" (OuterVolumeSpecName: "kube-api-access-nwgbw") pod "4b177ad6-c99a-4f62-8fd6-c223bc910e39" (UID: "4b177ad6-c99a-4f62-8fd6-c223bc910e39"). InnerVolumeSpecName "kube-api-access-nwgbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.551801 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b177ad6-c99a-4f62-8fd6-c223bc910e39" (UID: "4b177ad6-c99a-4f62-8fd6-c223bc910e39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.584851 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.584903 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.584919 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") on node \"crc\" DevicePath \"\""
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.684941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"b1a93475143b5e6aaa1f41b20b5f1ca3e9c28fb27e98f77735e4e68a0d1e4648"}
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.684994 4907 scope.go:117] "RemoveContainer" containerID="5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15"
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.685045 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7"
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.715238 4907 scope.go:117] "RemoveContainer" containerID="4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0"
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.760060 4907 scope.go:117] "RemoveContainer" containerID="3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c"
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.804029 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"]
Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.807543 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"]
Jan 27 19:42:23 crc kubenswrapper[4907]: I0127 19:42:23.771153 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" path="/var/lib/kubelet/pods/4b177ad6-c99a-4f62-8fd6-c223bc910e39/volumes"
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.520996 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.521724 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.521780 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh"
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.522877 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.522940 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" gracePeriod=600
Jan 27 19:42:26 crc kubenswrapper[4907]: E0127 19:42:26.641863 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746054 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" exitCode=0
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746086 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"}
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746121 4907 scope.go:117] "RemoveContainer" containerID="54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f"
Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746546 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:42:26 crc kubenswrapper[4907]: E0127 19:42:26.746868 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:42:38 crc kubenswrapper[4907]: I0127 19:42:38.748578 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:42:38 crc kubenswrapper[4907]: E0127 19:42:38.749377 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:42:49 crc kubenswrapper[4907]: I0127 19:42:49.749207 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:42:49 crc kubenswrapper[4907]: E0127 19:42:49.750027 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.103069 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerID="7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538" exitCode=0
Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.103471 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerDied","Data":"7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538"}
Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.104109 4907 scope.go:117] "RemoveContainer" containerID="7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538"
Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.400505 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/gather/0.log"
Jan 27 19:43:01 crc kubenswrapper[4907]: I0127 19:43:01.748405 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:43:01 crc kubenswrapper[4907]: E0127 19:43:01.749191 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:43:03 crc kubenswrapper[4907]: I0127 19:43:03.637802 4907 scope.go:117] "RemoveContainer" containerID="7ee1baf3c951ba0f96d98c51cb27f55aa01d16ba7f006a79294f4c292c0dc22c"
Jan 27 19:43:05 crc kubenswrapper[4907]: I0127 19:43:05.771368 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"]
Jan 27 19:43:05 crc kubenswrapper[4907]: I0127 19:43:05.772400 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cl48g/must-gather-s7n67" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="copy" containerID="cri-o://fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee" gracePeriod=2
Jan 27 19:43:05 crc kubenswrapper[4907]: I0127 19:43:05.785719 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"]
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.215093 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/copy/0.log"
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.215720 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerID="fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee" exitCode=143
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.534061 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/copy/0.log"
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.534764 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.644892 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") "
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.645117 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") "
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.653282 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx" (OuterVolumeSpecName: "kube-api-access-phssx") pod "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" (UID: "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61"). InnerVolumeSpecName "kube-api-access-phssx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.755161 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") on node \"crc\" DevicePath \"\""
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.929393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" (UID: "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.959266 4907 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.227930 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/copy/0.log"
Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.228430 4907 scope.go:117] "RemoveContainer" containerID="fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee"
Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.228471 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67"
Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.264577 4907 scope.go:117] "RemoveContainer" containerID="7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538"
Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.761012 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" path="/var/lib/kubelet/pods/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/volumes"
Jan 27 19:43:15 crc kubenswrapper[4907]: I0127 19:43:15.757364 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:43:15 crc kubenswrapper[4907]: E0127 19:43:15.758670 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:43:30 crc kubenswrapper[4907]: I0127 19:43:30.749169 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:43:30 crc kubenswrapper[4907]: E0127 19:43:30.751378 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:43:41 crc kubenswrapper[4907]: I0127 19:43:41.750001 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:43:41 crc kubenswrapper[4907]: E0127 19:43:41.750921 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:43:54 crc kubenswrapper[4907]: I0127 19:43:54.748978 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:43:54 crc kubenswrapper[4907]: E0127 19:43:54.750429 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:44:03 crc kubenswrapper[4907]: I0127 19:44:03.755862 4907 scope.go:117] "RemoveContainer" containerID="2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a"
Jan 27 19:44:03 crc kubenswrapper[4907]: I0127 19:44:03.789736 4907 scope.go:117] "RemoveContainer" containerID="0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9"
Jan 27 19:44:03 crc kubenswrapper[4907]: I0127 19:44:03.821218 4907 scope.go:117] "RemoveContainer" containerID="9e6cbd06744b574913b53f660db9189b287529c9f60c390d7656bf7d99231bfe"
Jan 27 19:44:06 crc kubenswrapper[4907]: I0127 19:44:06.749697 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:44:06 crc kubenswrapper[4907]: E0127 19:44:06.751405 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:44:20 crc kubenswrapper[4907]: I0127 19:44:20.749197 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:44:20 crc kubenswrapper[4907]: E0127 19:44:20.750261 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:44:32 crc kubenswrapper[4907]: I0127 19:44:32.749071 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:44:32 crc kubenswrapper[4907]: E0127 19:44:32.750510 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.561647 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hj4zw"]
Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562362 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-utilities"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562398 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-utilities"
Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562437 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-content"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562450 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-content"
Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562480 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="copy"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562510 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="copy"
Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562537 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="gather"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562548 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="gather"
Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562597 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562610 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562998 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.563031 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="gather"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.563106 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="copy"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.566690 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.596304 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4zw"]
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.698856 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-utilities\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.699227 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-catalog-content\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.699269 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq4m4\" (UniqueName: \"kubernetes.io/projected/b700555d-4c61-4c37-9536-b4656d126ac4-kube-api-access-zq4m4\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801197 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-utilities\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-catalog-content\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801355 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq4m4\" (UniqueName: \"kubernetes.io/projected/b700555d-4c61-4c37-9536-b4656d126ac4-kube-api-access-zq4m4\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801691 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-utilities\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801787 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-catalog-content\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.822725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq4m4\" (UniqueName: \"kubernetes.io/projected/b700555d-4c61-4c37-9536-b4656d126ac4-kube-api-access-zq4m4\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw"
Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.895264 4907 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:34 crc kubenswrapper[4907]: I0127 19:44:34.456613 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4zw"] Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.314863 4907 generic.go:334] "Generic (PLEG): container finished" podID="b700555d-4c61-4c37-9536-b4656d126ac4" containerID="aa0972d1fee961820af9ec68cf0ff0330732cc70ca944e082e27b2c6aa99bd25" exitCode=0 Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.314989 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerDied","Data":"aa0972d1fee961820af9ec68cf0ff0330732cc70ca944e082e27b2c6aa99bd25"} Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.315201 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerStarted","Data":"23e76a11773b360e0fc49bf6a99054330ebd902d5638be83ff86a491578fd201"} Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.316896 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:44:41 crc kubenswrapper[4907]: I0127 19:44:41.382924 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerStarted","Data":"e134d6755c86e52f6bec25cc82b163cc514bb35f0adcb3ce30bf391f09442494"} Jan 27 19:44:42 crc kubenswrapper[4907]: I0127 19:44:42.394499 4907 generic.go:334] "Generic (PLEG): container finished" podID="b700555d-4c61-4c37-9536-b4656d126ac4" containerID="e134d6755c86e52f6bec25cc82b163cc514bb35f0adcb3ce30bf391f09442494" exitCode=0 Jan 27 19:44:42 crc kubenswrapper[4907]: I0127 19:44:42.394590 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerDied","Data":"e134d6755c86e52f6bec25cc82b163cc514bb35f0adcb3ce30bf391f09442494"} Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.413533 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerStarted","Data":"5c02b65095acabde9976dba4a61ead897b760db462b0ca410ddae2af84e50437"} Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.443096 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hj4zw" podStartSLOduration=2.890748245 podStartE2EDuration="10.443077258s" podCreationTimestamp="2026-01-27 19:44:33 +0000 UTC" firstStartedPulling="2026-01-27 19:44:35.316625887 +0000 UTC m=+5930.445908499" lastFinishedPulling="2026-01-27 19:44:42.8689549 +0000 UTC m=+5937.998237512" observedRunningTime="2026-01-27 19:44:43.432292383 +0000 UTC m=+5938.561574995" watchObservedRunningTime="2026-01-27 19:44:43.443077258 +0000 UTC m=+5938.572359870" Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.748823 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:44:43 crc kubenswrapper[4907]: E0127 19:44:43.749288 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.896217 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.896279 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:44 crc kubenswrapper[4907]: I0127 19:44:44.957274 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hj4zw" podUID="b700555d-4c61-4c37-9536-b4656d126ac4" containerName="registry-server" probeResult="failure" output=< Jan 27 19:44:44 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:44:44 crc kubenswrapper[4907]: > Jan 27 19:44:53 crc kubenswrapper[4907]: I0127 19:44:53.950854 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.028811 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.130051 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4zw"] Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.227856 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.228199 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" containerID="cri-o://e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc" gracePeriod=2 Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.578187 4907 generic.go:334] "Generic (PLEG): container finished" podID="bae6221e-526b-4cc4-9f9b-1079238c9100" 
containerID="e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc" exitCode=0 Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.579751 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc"} Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.796698 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.879022 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"bae6221e-526b-4cc4-9f9b-1079238c9100\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.879188 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"bae6221e-526b-4cc4-9f9b-1079238c9100\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.879327 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"bae6221e-526b-4cc4-9f9b-1079238c9100\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.901748 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities" (OuterVolumeSpecName: "utilities") pod "bae6221e-526b-4cc4-9f9b-1079238c9100" (UID: 
"bae6221e-526b-4cc4-9f9b-1079238c9100"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.902005 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx" (OuterVolumeSpecName: "kube-api-access-42ntx") pod "bae6221e-526b-4cc4-9f9b-1079238c9100" (UID: "bae6221e-526b-4cc4-9f9b-1079238c9100"). InnerVolumeSpecName "kube-api-access-42ntx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.961313 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bae6221e-526b-4cc4-9f9b-1079238c9100" (UID: "bae6221e-526b-4cc4-9f9b-1079238c9100"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.982686 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.982998 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.983011 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") on node \"crc\" DevicePath \"\"" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.593471 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.602275 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5"} Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.602332 4907 scope.go:117] "RemoveContainer" containerID="e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.632735 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.636690 4907 scope.go:117] "RemoveContainer" containerID="9767b9bd6335f81b22f4e7d1b7fb00bd57b538db401b0811063af3d06773de87" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.646903 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.681943 4907 scope.go:117] "RemoveContainer" containerID="74c450a7c4e4a16e788bf96635acd49f01f09f365c5a97fb77a1f1947ba88ae4" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.781342 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" path="/var/lib/kubelet/pods/bae6221e-526b-4cc4-9f9b-1079238c9100/volumes" Jan 27 19:44:56 crc kubenswrapper[4907]: I0127 19:44:56.748739 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:44:56 crc kubenswrapper[4907]: E0127 19:44:56.749619 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.202951 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4"] Jan 27 19:45:00 crc kubenswrapper[4907]: E0127 19:45:00.204158 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204173 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4907]: E0127 19:45:00.204204 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204212 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4907]: E0127 19:45:00.204228 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204237 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204618 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.205651 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.219668 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4"] Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.228248 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.228842 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.323196 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.323365 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.323768 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.426618 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.426741 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.426872 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.427815 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.444421 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.453455 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.539746 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.025981 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4"] Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.667905 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerStarted","Data":"ceb5fd244e2e4ed7566cf19be5269e42d507c5fa56a3527599d5ed553232d56a"} Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.668310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerStarted","Data":"45327a01c4ebde93b6e11703d694de6e4e93f8b2517b09d769c382c1c4893d06"} Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.709027 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" 
podStartSLOduration=1.709004367 podStartE2EDuration="1.709004367s" podCreationTimestamp="2026-01-27 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:45:01.695550897 +0000 UTC m=+5956.824833519" watchObservedRunningTime="2026-01-27 19:45:01.709004367 +0000 UTC m=+5956.838286999" Jan 27 19:45:02 crc kubenswrapper[4907]: I0127 19:45:02.684929 4907 generic.go:334] "Generic (PLEG): container finished" podID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerID="ceb5fd244e2e4ed7566cf19be5269e42d507c5fa56a3527599d5ed553232d56a" exitCode=0 Jan 27 19:45:02 crc kubenswrapper[4907]: I0127 19:45:02.685426 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerDied","Data":"ceb5fd244e2e4ed7566cf19be5269e42d507c5fa56a3527599d5ed553232d56a"} Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.117601 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.241670 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.241949 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.242038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.242679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" (UID: "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.243028 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.248999 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" (UID: "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.249027 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9" (OuterVolumeSpecName: "kube-api-access-h9cg9") pod "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" (UID: "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0"). InnerVolumeSpecName "kube-api-access-h9cg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.345227 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.345518 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.721931 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerDied","Data":"45327a01c4ebde93b6e11703d694de6e4e93f8b2517b09d769c382c1c4893d06"} Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.721969 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45327a01c4ebde93b6e11703d694de6e4e93f8b2517b09d769c382c1c4893d06" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.722015 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.773522 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.785445 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:45:05 crc kubenswrapper[4907]: I0127 19:45:05.766628 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" path="/var/lib/kubelet/pods/2b8f8a81-de05-4458-b8bc-4031caa5a02c/volumes" Jan 27 19:45:10 crc kubenswrapper[4907]: I0127 19:45:10.748414 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:45:10 crc kubenswrapper[4907]: E0127 19:45:10.749533 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:45:22 crc kubenswrapper[4907]: I0127 19:45:22.748326 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:45:22 crc kubenswrapper[4907]: E0127 19:45:22.749452 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:45:35 crc kubenswrapper[4907]: I0127 19:45:35.749748 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:45:35 crc kubenswrapper[4907]: E0127 19:45:35.750805 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:45:50 crc kubenswrapper[4907]: I0127 19:45:50.748389 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:45:50 crc kubenswrapper[4907]: E0127 19:45:50.749348 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:03 crc kubenswrapper[4907]: I0127 19:46:03.995358 4907 scope.go:117] "RemoveContainer" containerID="730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08"
Jan 27 19:46:04 crc kubenswrapper[4907]: I0127 19:46:04.748102 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:04 crc kubenswrapper[4907]: E0127 19:46:04.748728 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.666087 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:46:06 crc kubenswrapper[4907]: E0127 19:46:06.668046 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerName="collect-profiles"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.668174 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerName="collect-profiles"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.668631 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerName="collect-profiles"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.671513 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.689302 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.751723 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.754920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.755324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.858573 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.858873 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.858974 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.859120 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.859713 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.888178 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:07 crc kubenswrapper[4907]: I0127 19:46:07.011154 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:07 crc kubenswrapper[4907]: I0127 19:46:07.539041 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:46:08 crc kubenswrapper[4907]: I0127 19:46:08.493005 4907 generic.go:334] "Generic (PLEG): container finished" podID="375d7447-f391-4503-b44d-738db6a38564" containerID="3abb88824085ee899bccbe1ed3e38c0cc719189897fcb43aeba249ef706f0fc6" exitCode=0
Jan 27 19:46:08 crc kubenswrapper[4907]: I0127 19:46:08.493179 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerDied","Data":"3abb88824085ee899bccbe1ed3e38c0cc719189897fcb43aeba249ef706f0fc6"}
Jan 27 19:46:08 crc kubenswrapper[4907]: I0127 19:46:08.493544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerStarted","Data":"16f74ed1ee4b667af4e334ece97cf14d7e101982b54a7105876827cd8c90beda"}
Jan 27 19:46:10 crc kubenswrapper[4907]: I0127 19:46:10.519752 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerStarted","Data":"e1264200a8a870986601fa5b7e908030bd6cb3ffb0dc855267ed8149420b7bc7"}
Jan 27 19:46:17 crc kubenswrapper[4907]: I0127 19:46:17.603716 4907 generic.go:334] "Generic (PLEG): container finished" podID="375d7447-f391-4503-b44d-738db6a38564" containerID="e1264200a8a870986601fa5b7e908030bd6cb3ffb0dc855267ed8149420b7bc7" exitCode=0
Jan 27 19:46:17 crc kubenswrapper[4907]: I0127 19:46:17.603823 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerDied","Data":"e1264200a8a870986601fa5b7e908030bd6cb3ffb0dc855267ed8149420b7bc7"}
Jan 27 19:46:18 crc kubenswrapper[4907]: I0127 19:46:18.615655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerStarted","Data":"aee5b3283b745b829bc57d89b62142c2f34e63655ee6d96fcc73449a76e8c53d"}
Jan 27 19:46:18 crc kubenswrapper[4907]: I0127 19:46:18.639656 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p64zx" podStartSLOduration=2.881088512 podStartE2EDuration="12.639633473s" podCreationTimestamp="2026-01-27 19:46:06 +0000 UTC" firstStartedPulling="2026-01-27 19:46:08.496264465 +0000 UTC m=+6023.625547077" lastFinishedPulling="2026-01-27 19:46:18.254809436 +0000 UTC m=+6033.384092038" observedRunningTime="2026-01-27 19:46:18.634380275 +0000 UTC m=+6033.763662907" watchObservedRunningTime="2026-01-27 19:46:18.639633473 +0000 UTC m=+6033.768916085"
Jan 27 19:46:18 crc kubenswrapper[4907]: I0127 19:46:18.748703 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:18 crc kubenswrapper[4907]: E0127 19:46:18.749203 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:27 crc kubenswrapper[4907]: I0127 19:46:27.011889 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:27 crc kubenswrapper[4907]: I0127 19:46:27.012527 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:28 crc kubenswrapper[4907]: I0127 19:46:28.069580 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:28 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:28 crc kubenswrapper[4907]: >
Jan 27 19:46:33 crc kubenswrapper[4907]: I0127 19:46:33.748840 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:33 crc kubenswrapper[4907]: E0127 19:46:33.749514 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:38 crc kubenswrapper[4907]: I0127 19:46:38.067267 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:38 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:38 crc kubenswrapper[4907]: >
Jan 27 19:46:44 crc kubenswrapper[4907]: I0127 19:46:44.748412 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:44 crc kubenswrapper[4907]: E0127 19:46:44.749868 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:48 crc kubenswrapper[4907]: I0127 19:46:48.092216 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:48 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:48 crc kubenswrapper[4907]: >
Jan 27 19:46:55 crc kubenswrapper[4907]: I0127 19:46:55.789682 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:55 crc kubenswrapper[4907]: E0127 19:46:55.790995 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:58 crc kubenswrapper[4907]: I0127 19:46:58.067403 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:58 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:58 crc kubenswrapper[4907]: >
Jan 27 19:47:07 crc kubenswrapper[4907]: I0127 19:47:07.749177 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:47:07 crc kubenswrapper[4907]: E0127 19:47:07.750160 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:47:08 crc kubenswrapper[4907]: I0127 19:47:08.071118 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:47:08 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:47:08 crc kubenswrapper[4907]: >
Jan 27 19:47:18 crc kubenswrapper[4907]: I0127 19:47:18.085106 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:47:18 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:47:18 crc kubenswrapper[4907]: >
Jan 27 19:47:21 crc kubenswrapper[4907]: I0127 19:47:21.748419 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:47:21 crc kubenswrapper[4907]: E0127 19:47:21.749604 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:47:27 crc kubenswrapper[4907]: I0127 19:47:27.093307 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:47:27 crc kubenswrapper[4907]: I0127 19:47:27.153741 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:47:27 crc kubenswrapper[4907]: I0127 19:47:27.333289 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:47:28 crc kubenswrapper[4907]: I0127 19:47:28.206464 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" containerID="cri-o://aee5b3283b745b829bc57d89b62142c2f34e63655ee6d96fcc73449a76e8c53d" gracePeriod=2
Jan 27 19:47:29 crc kubenswrapper[4907]: I0127 19:47:29.222900 4907 generic.go:334] "Generic (PLEG): container finished" podID="375d7447-f391-4503-b44d-738db6a38564" containerID="aee5b3283b745b829bc57d89b62142c2f34e63655ee6d96fcc73449a76e8c53d" exitCode=0
Jan 27 19:47:29 crc kubenswrapper[4907]: I0127 19:47:29.222987 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerDied","Data":"aee5b3283b745b829bc57d89b62142c2f34e63655ee6d96fcc73449a76e8c53d"}
Jan 27 19:47:29 crc kubenswrapper[4907]: I0127 19:47:29.898092 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:47:29 crc kubenswrapper[4907]: I0127 19:47:29.947118 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") pod \"375d7447-f391-4503-b44d-738db6a38564\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") "
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.054022 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") pod \"375d7447-f391-4503-b44d-738db6a38564\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") "
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.054134 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") pod \"375d7447-f391-4503-b44d-738db6a38564\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") "
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.055059 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities" (OuterVolumeSpecName: "utilities") pod "375d7447-f391-4503-b44d-738db6a38564" (UID: "375d7447-f391-4503-b44d-738db6a38564"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.067964 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5" (OuterVolumeSpecName: "kube-api-access-99fk5") pod "375d7447-f391-4503-b44d-738db6a38564" (UID: "375d7447-f391-4503-b44d-738db6a38564"). InnerVolumeSpecName "kube-api-access-99fk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.090353 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "375d7447-f391-4503-b44d-738db6a38564" (UID: "375d7447-f391-4503-b44d-738db6a38564"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.157671 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") on node \"crc\" DevicePath \"\""
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.157719 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.157737 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.236931 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerDied","Data":"16f74ed1ee4b667af4e334ece97cf14d7e101982b54a7105876827cd8c90beda"}
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.237016 4907 scope.go:117] "RemoveContainer" containerID="aee5b3283b745b829bc57d89b62142c2f34e63655ee6d96fcc73449a76e8c53d"
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.237239 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.281302 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.294184 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.310230 4907 scope.go:117] "RemoveContainer" containerID="e1264200a8a870986601fa5b7e908030bd6cb3ffb0dc855267ed8149420b7bc7"
Jan 27 19:47:30 crc kubenswrapper[4907]: I0127 19:47:30.335051 4907 scope.go:117] "RemoveContainer" containerID="3abb88824085ee899bccbe1ed3e38c0cc719189897fcb43aeba249ef706f0fc6"
Jan 27 19:47:31 crc kubenswrapper[4907]: I0127 19:47:31.761216 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="375d7447-f391-4503-b44d-738db6a38564" path="/var/lib/kubelet/pods/375d7447-f391-4503-b44d-738db6a38564/volumes"
Jan 27 19:47:33 crc kubenswrapper[4907]: I0127 19:47:33.748842 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:47:34 crc kubenswrapper[4907]: I0127 19:47:34.289020 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"1f5249c5c51085281b5ae73cfa8776ce5a74a94c14261828692ca0c3c057f114"}
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.098777 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nrfbq"]
Jan 27 19:48:03 crc kubenswrapper[4907]: E0127 19:48:03.099957 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.099975 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server"
Jan 27 19:48:03 crc kubenswrapper[4907]: E0127 19:48:03.099994 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="extract-utilities"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.100003 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="extract-utilities"
Jan 27 19:48:03 crc kubenswrapper[4907]: E0127 19:48:03.100021 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="extract-content"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.100029 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="extract-content"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.100337 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.102832 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.145277 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrfbq"]
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.185701 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9l6\" (UniqueName: \"kubernetes.io/projected/3025ead0-4a5d-44fa-8320-a6240d43ad66-kube-api-access-zx9l6\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.185906 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-utilities\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.186058 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-catalog-content\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.288019 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-catalog-content\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.288158 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9l6\" (UniqueName: \"kubernetes.io/projected/3025ead0-4a5d-44fa-8320-a6240d43ad66-kube-api-access-zx9l6\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.288602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-catalog-content\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.288821 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-utilities\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.289418 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-utilities\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.329785 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9l6\" (UniqueName: \"kubernetes.io/projected/3025ead0-4a5d-44fa-8320-a6240d43ad66-kube-api-access-zx9l6\") pod \"redhat-marketplace-nrfbq\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") " pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:03 crc kubenswrapper[4907]: I0127 19:48:03.429148 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:04 crc kubenswrapper[4907]: I0127 19:48:04.486838 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrfbq"]
Jan 27 19:48:04 crc kubenswrapper[4907]: I0127 19:48:04.634485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrfbq" event={"ID":"3025ead0-4a5d-44fa-8320-a6240d43ad66","Type":"ContainerStarted","Data":"a5af8a2196892af21f59d53c29c7c5fb982d26c97df4b409170d7e6f6bfa1c4c"}
Jan 27 19:48:04 crc kubenswrapper[4907]: W0127 19:48:04.834266 4907 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3025ead0_4a5d_44fa_8320_a6240d43ad66.slice/crio-conmon-fe8776fe7577fd42de275069684082a69fa9350a165fbed192fb339fb98b9e59.scope/memory.min": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3025ead0_4a5d_44fa_8320_a6240d43ad66.slice/crio-conmon-fe8776fe7577fd42de275069684082a69fa9350a165fbed192fb339fb98b9e59.scope/memory.min: no such device
Jan 27 19:48:04 crc kubenswrapper[4907]: E0127 19:48:04.836064 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3025ead0_4a5d_44fa_8320_a6240d43ad66.slice/crio-conmon-fe8776fe7577fd42de275069684082a69fa9350a165fbed192fb339fb98b9e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3025ead0_4a5d_44fa_8320_a6240d43ad66.slice/crio-fe8776fe7577fd42de275069684082a69fa9350a165fbed192fb339fb98b9e59.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 19:48:05 crc kubenswrapper[4907]: I0127 19:48:05.649066 4907 generic.go:334] "Generic (PLEG): container finished" podID="3025ead0-4a5d-44fa-8320-a6240d43ad66" containerID="fe8776fe7577fd42de275069684082a69fa9350a165fbed192fb339fb98b9e59" exitCode=0
Jan 27 19:48:05 crc kubenswrapper[4907]: I0127 19:48:05.649412 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrfbq" event={"ID":"3025ead0-4a5d-44fa-8320-a6240d43ad66","Type":"ContainerDied","Data":"fe8776fe7577fd42de275069684082a69fa9350a165fbed192fb339fb98b9e59"}
Jan 27 19:48:07 crc kubenswrapper[4907]: I0127 19:48:07.669577 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrfbq" event={"ID":"3025ead0-4a5d-44fa-8320-a6240d43ad66","Type":"ContainerStarted","Data":"27d5a4941c655a6a0c7170fc8e39570a1786e8ff230ec8ccb05464fbe163038d"}
Jan 27 19:48:10 crc kubenswrapper[4907]: I0127 19:48:10.704493 4907 generic.go:334] "Generic (PLEG): container finished" podID="3025ead0-4a5d-44fa-8320-a6240d43ad66" containerID="27d5a4941c655a6a0c7170fc8e39570a1786e8ff230ec8ccb05464fbe163038d" exitCode=0
Jan 27 19:48:10 crc kubenswrapper[4907]: I0127 19:48:10.704646 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrfbq" event={"ID":"3025ead0-4a5d-44fa-8320-a6240d43ad66","Type":"ContainerDied","Data":"27d5a4941c655a6a0c7170fc8e39570a1786e8ff230ec8ccb05464fbe163038d"}
Jan 27 19:48:12 crc kubenswrapper[4907]: I0127 19:48:12.729456 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrfbq" event={"ID":"3025ead0-4a5d-44fa-8320-a6240d43ad66","Type":"ContainerStarted","Data":"3a7d540ae0725c42debc873bccfc46325db783ea34b0ba1518cb3760561712bc"}
Jan 27 19:48:13 crc kubenswrapper[4907]: I0127 19:48:13.430060 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:13 crc kubenswrapper[4907]: I0127 19:48:13.430931 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:14 crc kubenswrapper[4907]: I0127 19:48:14.495242 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nrfbq" podUID="3025ead0-4a5d-44fa-8320-a6240d43ad66" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:48:14 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:48:14 crc kubenswrapper[4907]: >
Jan 27 19:48:23 crc kubenswrapper[4907]: I0127 19:48:23.482684 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:23 crc kubenswrapper[4907]: I0127 19:48:23.504587 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nrfbq" podStartSLOduration=13.723239884 podStartE2EDuration="20.504548336s" podCreationTimestamp="2026-01-27 19:48:03 +0000 UTC" firstStartedPulling="2026-01-27 19:48:05.651635949 +0000 UTC m=+6140.780918561" lastFinishedPulling="2026-01-27 19:48:12.432944401 +0000 UTC m=+6147.562227013" observedRunningTime="2026-01-27 19:48:12.757049082 +0000 UTC m=+6147.886331704" watchObservedRunningTime="2026-01-27 19:48:23.504548336 +0000 UTC m=+6158.633830948"
Jan 27 19:48:23 crc kubenswrapper[4907]: I0127 19:48:23.543434 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:23 crc kubenswrapper[4907]: I0127 19:48:23.720717 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrfbq"]
Jan 27 19:48:24 crc kubenswrapper[4907]: I0127 19:48:24.877974 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nrfbq" podUID="3025ead0-4a5d-44fa-8320-a6240d43ad66" containerName="registry-server" containerID="cri-o://3a7d540ae0725c42debc873bccfc46325db783ea34b0ba1518cb3760561712bc" gracePeriod=2
Jan 27 19:48:25 crc kubenswrapper[4907]: I0127 19:48:25.889255 4907 generic.go:334] "Generic (PLEG): container finished" podID="3025ead0-4a5d-44fa-8320-a6240d43ad66" containerID="3a7d540ae0725c42debc873bccfc46325db783ea34b0ba1518cb3760561712bc" exitCode=0
Jan 27 19:48:25 crc kubenswrapper[4907]: I0127 19:48:25.889600 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrfbq" event={"ID":"3025ead0-4a5d-44fa-8320-a6240d43ad66","Type":"ContainerDied","Data":"3a7d540ae0725c42debc873bccfc46325db783ea34b0ba1518cb3760561712bc"}
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.011388 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrfbq"
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.166472 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx9l6\" (UniqueName: \"kubernetes.io/projected/3025ead0-4a5d-44fa-8320-a6240d43ad66-kube-api-access-zx9l6\") pod \"3025ead0-4a5d-44fa-8320-a6240d43ad66\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") "
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.166824 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-utilities\") pod \"3025ead0-4a5d-44fa-8320-a6240d43ad66\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") "
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.167019 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-catalog-content\") pod \"3025ead0-4a5d-44fa-8320-a6240d43ad66\" (UID: \"3025ead0-4a5d-44fa-8320-a6240d43ad66\") "
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.167624 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-utilities" (OuterVolumeSpecName: "utilities") pod "3025ead0-4a5d-44fa-8320-a6240d43ad66" (UID: "3025ead0-4a5d-44fa-8320-a6240d43ad66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.168469 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.175840 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3025ead0-4a5d-44fa-8320-a6240d43ad66-kube-api-access-zx9l6" (OuterVolumeSpecName: "kube-api-access-zx9l6") pod "3025ead0-4a5d-44fa-8320-a6240d43ad66" (UID: "3025ead0-4a5d-44fa-8320-a6240d43ad66"). InnerVolumeSpecName "kube-api-access-zx9l6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.189724 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3025ead0-4a5d-44fa-8320-a6240d43ad66" (UID: "3025ead0-4a5d-44fa-8320-a6240d43ad66"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.270804 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3025ead0-4a5d-44fa-8320-a6240d43ad66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.270836 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx9l6\" (UniqueName: \"kubernetes.io/projected/3025ead0-4a5d-44fa-8320-a6240d43ad66-kube-api-access-zx9l6\") on node \"crc\" DevicePath \"\"" Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.902644 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrfbq" event={"ID":"3025ead0-4a5d-44fa-8320-a6240d43ad66","Type":"ContainerDied","Data":"a5af8a2196892af21f59d53c29c7c5fb982d26c97df4b409170d7e6f6bfa1c4c"} Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.902735 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrfbq" Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.902953 4907 scope.go:117] "RemoveContainer" containerID="3a7d540ae0725c42debc873bccfc46325db783ea34b0ba1518cb3760561712bc" Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.927254 4907 scope.go:117] "RemoveContainer" containerID="27d5a4941c655a6a0c7170fc8e39570a1786e8ff230ec8ccb05464fbe163038d" Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.945824 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrfbq"] Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.961883 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrfbq"] Jan 27 19:48:26 crc kubenswrapper[4907]: I0127 19:48:26.968216 4907 scope.go:117] "RemoveContainer" containerID="fe8776fe7577fd42de275069684082a69fa9350a165fbed192fb339fb98b9e59" Jan 27 19:48:27 crc kubenswrapper[4907]: I0127 19:48:27.762385 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3025ead0-4a5d-44fa-8320-a6240d43ad66" path="/var/lib/kubelet/pods/3025ead0-4a5d-44fa-8320-a6240d43ad66/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136213254024447 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136213254017364 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136176662016522 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136176662015472 5ustar corecore